Data is at the core of innovation generally, and of artificial intelligence (AI) innovation in particular. To stay competitive, Microsoft has modernised its offerings across the board, not least its employment-oriented social network LinkedIn. This piece explores how LinkedIn’s ‘Data for Generative AI Improvement’ setting uses member data to refine AI models, how the related data privacy settings work, and what the consequences are in the EU.
LinkedIn’s signal that it would use member data to improve generative AI illustrates a new step in the refinement of AI technologies: mining the more than 900 million engagements a week to train its models. The move cuts both ways: it raises a genuine privacy issue, and the feature is switched on by default.
LinkedIn members can control this directly via their Data Privacy settings, which may ease the anxieties of users who dislike the idea of their personal information being used as training material for AI. Such control is an example of the compromise Microsoft is trying to strike between the need for innovation on the one hand and respect for user choice on the other. The catch is that the feature is switched on by default, raising significant questions about consent to the use of data for AI purposes.
Notably, LinkedIn’s new feature does not apply in the EU, the EEA, or Switzerland, whose strict data protection laws place them outside its reach; the exception throws the global variation in data privacy norms into sharp relief. What is more, it remains to be seen whether such AI training practices are possible at all in regions with robust data protection legislation.
This unrelenting data hunger has also landed Microsoft in hot water over its OpenAI partnership. Several companies are now suing OpenAI, which Microsoft partly backs, for copyright infringement. These lawsuits highlight the tension between training AI models on data scraped from the web and respecting existing copyright law.
Beneath the legal wrangling and privacy considerations, the crux of the matter is user consent. That LinkedIn’s data-for-AI-improvement feature was controversially turned on by default is emblematic of a wider industry practice of assuming user consent rather than asking for it, a practice at odds with the principle of user permission.
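The difference between assuming consent and asking for it can be made concrete. The sketch below is purely illustrative (not LinkedIn’s actual code): the only thing separating an opt-out model from an opt-in model is the default value of a flag, yet that default decides whose data ends up in a training pipeline when members never touch their settings.

```python
# Illustrative sketch (hypothetical names, not LinkedIn's code): how a
# default value turns "assumed consent" into eligibility for AI training.
from dataclasses import dataclass

@dataclass
class OptOutSettings:
    # Opt-out model: data is used unless the member acts to disable it.
    use_data_for_ai: bool = True

@dataclass
class OptInSettings:
    # Opt-in model: data is used only after an affirmative choice.
    use_data_for_ai: bool = False

def eligible_for_training(members):
    """Return the members whose data would feed the training pipeline."""
    return [m for m in members if m.use_data_for_ai]

# Members who never visit their settings keep the defaults.
opt_out_members = [OptOutSettings() for _ in range(3)]
opt_in_members = [OptInSettings() for _ in range(3)]

print(len(eligible_for_training(opt_out_members)))  # all three are counted in
print(len(eligible_for_training(opt_in_members)))   # none are counted in
```

Under the opt-out default, inaction is treated as consent; under opt-in, it is not. That asymmetry is the whole of the consent debate in miniature.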
The data used to train the AI is drawn not only from public posts or articles but from the full spectrum of a member’s interactions with the site. The breadth of this collection amplifies the privacy risks for users who share freely.
For users seeking to keep their data out of AI training, LinkedIn’s privacy settings provide the exit route. What follows is a guide to disabling the Data for Generative AI Improvement setting, a prerequisite for those championing personal data sovereignty.
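LinkedIn periodically renames menus, so the exact labels below may drift; as of this writing, the path is reported to be:

```text
1. Click the "Me" icon at the top of the LinkedIn homepage and choose
   "Settings & Privacy".
2. Select "Data Privacy" in the left-hand menu.
3. Under "How LinkedIn uses your data", click
   "Data for Generative AI Improvement".
4. Switch the "Use my data for training content creation AI models"
   toggle to Off.
```

Opting out stops future use of your data for training; it does not retroactively remove data from models that have already been trained.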
Microsoft’s manoeuvring via LinkedIn’s new feature is a reminder that, one way or another, the tech giant aims to lead AI innovation. The path forward is a treacherous one, running through courtroom battles over the right to use data for AI as well as ethically charged questions about user consent and the protection of private information.
Fundamentally, Microsoft sees AI and machine learning as essential technologies that will help societies progress. Through ventures such as LinkedIn’s new Data for Generative AI Improvement feature, the company is setting itself apart in an increasingly contested AI landscape. As this article has illustrated, that positioning will never be easy, shaped as it is by the politics of user consent and data privacy as well as by further innovations in AI itself.
Ultimately, LinkedIn’s new feature makes Microsoft a case study in the limits and opportunities of data mining as a way to advance AI. As the AI boom gathers pace, Microsoft’s central role calls attention to the careful balancing act between users’ privacy rights and corporate interest in data. Microsoft’s path in the years ahead will be a story to watch, as its successes and failures in these dimensions shape the future of AI.
© 2024 UC Technology Inc. All Rights Reserved.