Recently, LinkedIn faced backlash for using user profile pictures for AI model training without explicit opt-in. Critics say the company is violating users' data privacy expectations, raising concern among users and privacy advocates about the potential consequences.
The Controversy
LinkedIn updated its terms of service, effective September 5, 2024. The new terms permit the use of user profile pictures, including those set to private, for AI model training without explicit opt-in. In practice, this could mean applying facial recognition technology to users' images to extract information without their consent or knowledge.
LinkedIn tells users that the collected data will be used to improve its products and services. Many, however, are concerned about the potential ramifications. Some worry that images used without permission could be turned into deepfakes or other forms of synthetic media. Others fear that LinkedIn may quietly sell the data to third parties.
Privacy Advocates Weigh In
Privacy advocates say the move goes against basic principles of data privacy. They argue that making users' images available for AI training without explicit opt-in violates their right to control their own data.
In response, LinkedIn has said it takes all the steps necessary to safeguard users' privacy. The company further clarified that only de-identified data will be used in AI model training, and that users can still opt out of having their images used for such training.
The Future Of AI And Privacy
The dispute over LinkedIn’s use of user profile pictures in AI training reflects the growing tension between AI development and data privacy. Demands for greater oversight of AI technologies are increasing as their development and use risk violating user privacy.
How this dispute will be resolved remains to be seen. LinkedIn may ultimately have no choice but to amend its terms of service in response to the backlash, and the episode could spark broader discussion of the ethical implications of AI and data privacy.
LinkedIn's use of user profile pictures for AI model training without explicit opt-in has created a major privacy backlash, raising concern among users and privacy advocates about the potential consequences. The move reflects the growing tension between AI development and data privacy and raises significant questions about the future of AI in society.