In the AI era, Microsoft is dedicated to increasing trust and preserving privacy. The company is putting in place technological and organisational safeguards to ensure data security and protection across all of its AI systems. In a blog post this week, Julie Brill, Microsoft's Chief Privacy Officer and Vice President of Global Privacy and Regulatory Affairs, confirmed that the tech giant's approach to safeguarding privacy in AI rests on security, transparency, user control, and continued compliance with data protection requirements.
The Privacy in AI (PAI) group at Microsoft Research has been investigating privacy-preserving machine learning and federated learning, including privacy threats, privacy metrics, and mitigations for machine learning systems.
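To make these research topics concrete, the following is a minimal illustrative sketch (not Microsoft's implementation) of two ideas they touch on: federated averaging of client model updates, with Gaussian noise added to the aggregate as a simple differential-privacy-style mitigation. The `clip_norm` and `noise_scale` parameters are hypothetical values chosen purely for illustration.

```python
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale a client's update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update if norm <= clip_norm else update * (clip_norm / norm)

def federated_average(client_updates, clip_norm=1.0, noise_scale=0.1, rng=None):
    """Average clipped client updates, then add Gaussian noise to the mean
    so no single client's contribution is exposed exactly."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_scale * clip_norm / len(clipped), size=mean.shape)
    return mean + noise

# Usage: three simulated clients each send a local update vector;
# the server only ever sees the noised average.
updates = [np.array([0.2, -0.5, 1.3]),
           np.array([0.1, -0.4, 0.9]),
           np.array([0.3, -0.6, 1.1])]
print(federated_average(updates))
```

The design point this sketch illustrates is that raw per-client data never leaves the device; only clipped, aggregated, and noised updates are shared.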
Microsoft has established six fundamental principles for responsible AI, including privacy and security, which it considers critical for developing responsible and trustworthy AI. According to the tech firm, its responsible and trusted AI principles are grounded in ethical and explainability considerations, and it emphasises the primacy of privacy and security in all AI systems.
This comes as ChatGPT-maker OpenAI unveiled a framework to address safety concerns in its most advanced models, including giving its board veto power to reverse safety decisions made by leadership. Microsoft is considered the largest investor in OpenAI, owning 49% of the for-profit subsidiary. The partnership is now under antitrust scrutiny by the UK regulator and the US Federal Trade Commission. Recently, OpenAI and Microsoft were hit by multiple lawsuits in the US over the use of copyrighted work to train their AI models. OpenAI is also under investigation by the Polish watchdog following a complaint that ChatGPT breaches EU data protection law.