ResilientFL '21: Proceedings of the First Workshop on Systems Challenges in Reliable and Secure Federated Learning


FedScale: Benchmarking Model and System Performance of Federated Learning

Redundancy in cost functions for Byzantine fault-tolerant federated learning

Federated learning has gained significant attention in recent years owing to advances in hardware and the rapid growth of data collection. However, its ability to incorporate a large number of participating agents with diverse data sources makes federated learning susceptible to adversarial agents. This paper summarizes our recent results on server-based Byzantine fault-tolerant distributed optimization, with applications to resilience in federated learning. Specifically, we characterize redundancies in the agents' cost functions that are necessary and sufficient for provable Byzantine resilience in distributed optimization. We discuss the implications of these results in the context of federated learning.
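In practice, Byzantine resilience at the server is enforced through a robust aggregation rule that bounds the influence of any faulty agent. The sketch below is illustrative only, not the redundancy-based construction from this paper: it shows a coordinate-wise trimmed mean, a standard robust aggregator that tolerates up to `f` Byzantine agents by discarding the `f` largest and `f` smallest values in each coordinate.

```python
import numpy as np

def trimmed_mean(updates, f):
    """Coordinate-wise trimmed mean over agents' gradient updates.

    For each coordinate, drop the f largest and f smallest values
    across agents and average the remainder, limiting the effect of
    up to f Byzantine (arbitrarily corrupted) updates.
    """
    u = np.sort(np.asarray(updates, dtype=float), axis=0)  # sort per coordinate
    if len(updates) <= 2 * f:
        raise ValueError("need more than 2f agents to trim f from each side")
    return u[f:len(updates) - f].mean(axis=0)
```

For example, with four agents reporting gradients 0, 1, 2, and 100 (the last from a Byzantine agent) and `f = 1`, the outlier 100 is trimmed and the aggregate is 1.5 rather than the contaminated mean 25.75.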

Towards an Efficient System for Differentially-private, Cross-device Federated Learning

This paper presents early-stage ideas for a new federated-averaging system that aims to provide differential privacy efficiently, even when a fraction of devices is malicious and there is no trusted core.
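The standard building block for differentially private federated averaging is to clip each client's update to a fixed L2 norm and add Gaussian noise calibrated to that clipping bound. The sketch below illustrates that generic mechanism, not this paper's system design (the function name and parameters are hypothetical); handling malicious devices or an untrusted core requires additional machinery such as secure aggregation.

```python
import numpy as np

def dp_fedavg_round(updates, clip_norm, noise_mult, rng=None):
    """One round of differentially private federated averaging.

    Each client update is clipped to L2 norm `clip_norm`, the clipped
    updates are averaged, and Gaussian noise with standard deviation
    noise_mult * clip_norm / n is added to the average.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for u in updates:
        u = np.asarray(u, dtype=float)
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

With `noise_mult = 0` the function reduces to plain federated averaging over clipped updates; the privacy/utility trade-off is governed by `clip_norm` and `noise_mult`.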

GradSec: a TEE-based Scheme Against Federated Learning Inference Attacks

Community-Structured Decentralized Learning for Resilient EI

Separation of Powers in Federated Learning (Poster Paper)

In federated learning (FL), model updates from mutually distrusting parties are aggregated in a centralized fusion server. The concentration of model updates simplifies FL's model building process, but might lead to unforeseeable information leakage. This problem has become acute due to recent FL attacks that can reconstruct large fractions of training data from ostensibly "sanitized" model updates.

In this paper, we re-examine the current design of FL systems under the new security model of reconstruction attacks. To break down information concentration, we build TRUDA, a new cross-silo FL system employing a trustworthy and decentralized aggregation architecture. Based on the unique computational properties of model-fusion algorithms, we disassemble all exchanged model updates at parameter granularity and re-stitch them to form random partitions designated for multiple hardware-protected aggregators. Thus, each aggregator has only a fragmentary and shuffled view of model updates and is oblivious to the model architecture. The security mechanisms deployed in TRUDA effectively mitigate training-data reconstruction attacks, while preserving the accuracy of trained models and keeping performance overheads low.
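The core idea of parameter-level disassembly can be illustrated with a small sketch. The following is an assumed simplification of TRUDA's mechanism (function names and the use of a seeded permutation are hypothetical): a flattened model update is randomly permuted and split into shards, one per aggregator, so no single aggregator observes the whole update or its layout, while parties holding the permutation can re-stitch the aggregated shards.

```python
import numpy as np

def partition_update(update, n_aggregators, seed):
    """Shuffle a flattened model update and split it into shards,
    one per aggregator. Each shard is (parameter indices, values),
    so any single aggregator sees only a fragmentary, shuffled view."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(update.size)
    shards = np.array_split(idx, n_aggregators)
    return [(s, update[s]) for s in shards]

def reassemble(shards, size):
    """Re-stitch aggregated shards back into a full update, using the
    parameter indices attached to each shard."""
    out = np.empty(size)
    for idx, vals in shards:
        out[idx] = vals
    return out
```

In a real deployment each shard would be aggregated inside a separate hardware-protected enclave before re-stitching; the sketch only shows the partition/reassembly round trip.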