Secure Federated Learning Architectures for Privacy-Preserving AI Enhancements in Meeting Tools
Abstract
The rapid adoption of AI-powered meeting services, including real-time transcription and translation, automated summarization, and action-item extraction, has heightened concerns about the privacy and security of user data. Conventional centralized learning models expose sensitive information to potential infiltration, making them unsuitable for collaborative and corporate communication environments. To overcome this limitation, secure federated learning (FL) architectures provide a decentralized paradigm that trains models on distributed user devices without transferring raw data. This paper examines how advanced cryptographic methods, including secure aggregation, homomorphic encryption, and differential privacy, can make FL-based meeting systems more resistant to inference and adversarial attacks. A framework is proposed for enhancing meeting tools with privacy-preserving AI, emphasizing scalability, real-time adaptability, and compliance with international data protection regulations. Experimental analyses indicate that secure FL can deliver near-centralized performance while guaranteeing confidentiality, trust, and resilience in multi-user communication environments. The results highlight the transformative role of secure FL systems in the future of privacy-aware smart meeting technologies.
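The combination of secure aggregation and differential privacy that the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`mask_updates`, `dp_federated_average`) and parameters (`clip_norm`, `noise_mult`) are assumptions chosen for illustration. Each client clips its model update, pairwise random masks hide individual updates while cancelling in the sum, and Gaussian noise calibrated to the clipping bound is added before averaging.

```python
import numpy as np

def mask_updates(updates, rng):
    """Secure-aggregation sketch: each pair of clients (i, j) shares a
    random mask r_ij that client i adds and client j subtracts, so no
    single masked vector reveals its update, yet the masks cancel in
    the aggregate sum."""
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.normal(size=updates[0].shape)  # shared pairwise mask r_ij
            masked[i] += r  # client i adds the mask
            masked[j] -= r  # client j subtracts it
    return masked

def dp_federated_average(updates, clip_norm=1.0, noise_mult=0.1, rng=None):
    """DP-FedAvg-style sketch: clip each client's update to bound its
    sensitivity, mask the clipped updates, sum them (masks cancel),
    add Gaussian noise scaled to the clipping bound, and average."""
    rng = rng or np.random.default_rng()
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    masked = mask_updates(clipped, rng)
    total = np.sum(masked, axis=0)  # pairwise masks cancel here
    total += rng.normal(scale=noise_mult * clip_norm, size=total.shape)
    return total / len(updates)
```

In a real deployment the pairwise masks would be derived from key agreement between clients (so the server never sees them) and the noise scale would be chosen from a formal privacy accountant; here both are simplified to make the cancellation and clipping mechanics visible.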