Why artificial intelligence may go too far for the media
AI has enormous potential to transform the way media organizations receive, edit and distribute content.
In this digital age, new technologies offer us the ability to make what was previously considered impossible a reality. From virtual assistants on our smartphones and automated transactions in financial services to applications in the defense space, artificial intelligence (AI) plays a critical role in creating more efficient, effective and creative workflows across industries.
According to the World Economic Forum, the AI sector will grow by roughly 50% a year through 2025, by which point it is projected to be worth an impressive $127 billion. This will have a visible effect on the media and entertainment sector, to which, according to PwC, AI will contribute around $150 billion a year.
AI can boost efficiency and consistency and add value where it is currently lacking. In short, it allows media organizations to combine strong content with deep personalization, creating a more fluid and meaningful news experience for audiences around the world.
Since it seems impossible to imagine a future without AI, it is crucial to use it well, starting with ethics. The race to develop and adopt the latest AI technologies is accelerating across every touchpoint, from content creation to consumption, placing a growing responsibility on both users and providers for its ethical impact.
How can we ensure that the use of AI benefits everyone, from creators to consumers, while still respecting today's journalistic principles and values?
The fight against fake news
According to recent Reuters research, almost three-quarters of media outlets are now interested in how AI can help them create and distribute content more efficiently, through applications such as voice recognition, facial detection and automated editing corrections. Outlets like Bloomberg already rely on automation to cover "economic news" (e.g. financial market reports) and save journalists time. By 2027, 90% of all articles worldwide are expected to be written by AI.
The potential for AI to positively transform news and content production is evident, and over the coming years it will play a crucial role in determining the content we watch and read in our daily lives. But how much power and control should we hand over to AI? While "thinking" technology is rapidly gaining utility, it cannot be given absolute freedom and must adhere to some form of ethical principles. This is especially important in the fight against fake news.
Machine learning (ML), defined as the science of enabling computers to learn from data, identify patterns and make decisions with minimal human intervention, is essential if AI is to fight fake news. The idea is that machines improve their performance over time and become progressively more autonomous. So it is no surprise that AI is being used to automatically generate and curate stories.
However, before reaching this point, ML algorithms must be trained and programmed by humans to improve the accuracy of AI. This is very important, because machines without human intervention lack basic human skills such as common sense, understanding and the ability to contextualize, which causes many difficulties when correctly determining whether content is truthful or not. If a media outlet lets AI take its course without any human intervention (such as contextualization), it risks blurring the line between information and opinion, potentially encouraging fake news rather than fighting it.
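To make the training idea above concrete, here is a minimal sketch of the kind of pattern-learning the article describes: a tiny naive Bayes text classifier trained on human-labeled headlines. The headlines, labels and thresholds are all invented for illustration; a real fake-news system would need far larger labeled datasets, and, as the article stresses, human editors to supply the context such statistics cannot capture.

```python
from collections import Counter
import math

# Toy human-labeled training data (invented examples for illustration).
train = [
    ("central bank raises interest rates by a quarter point", "real"),
    ("quarterly earnings report shows modest revenue growth", "real"),
    ("miracle cure discovered doctors hate this secret trick", "fake"),
    ("shocking truth they do not want you to know revealed", "fake"),
]

def fit(examples):
    """Count word frequencies per label (the 'learning from data' step)."""
    word_counts = {"real": Counter(), "fake": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Return the label with the higher smoothed log-probability."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            # Laplace smoothing: unseen words don't zero out the score.
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = fit(train)
print(predict("shocking secret trick revealed", *model))  # prints "fake"
```

Note that the classifier only reflects the patterns in its human-curated labels; it has no common sense or grasp of context, which is exactly why the article argues that human oversight remains indispensable.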
Héctor Sierra, Sales Manager of Sony Europe, comments that "the potential of AI in any field, including the media, is incalculable. Even so, it is necessary to keep in mind that every technological advance entails a series of risks, and therefore it is of crucial importance to maintain exhaustive control of its implementation and work constantly for its correct development."
Personalization without filter bubbles
Content personalization can create higher-quality experiences for consumers, as we've already seen with streaming services like Netflix, by creating recommendations based on personal behavior and viewing history. Media is no exception and is already using AI to meet the demand for personalization.
For example, James, the relatively recent recommendation service developed by The Times and Sunday Times for News UK, learns individual preferences and automatically personalizes each edition (by format, timing and frequency). Its algorithms are initially programmed by humans but improve over time as the system optimizes toward a set of agreed-upon outcomes.
While algorithmic filtering (automatic selection of what content should and should not be shown to users and how it is presented) meets consumer demand for personalization, it can also go too far. What if consumers only listen to and read the news they want to hear instead of what is really happening around them and in the world?
This is what is known as the “filter bubble” problem: algorithms designed by platforms to maintain users' interest can lead them to see only content that legitimizes their beliefs and opinions. It is the media's responsibility to find the balance between providing consumers with content tailored to their personal interests and needs and ensuring they continue to be exposed to both sides of the story.
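One simple way the balance described above can be struck is to reserve a fraction of feed slots for general-interest stories the user's profile would not normally surface. The sketch below is a hypothetical illustration of that idea, not any outlet's actual algorithm; the story titles and the 25% diversity ratio are invented assumptions.

```python
def diversified_feed(personalized, general, diversity_ratio=0.25):
    """Interleave general-interest stories into a personalized feed.

    Every (1 / diversity_ratio)-th slot is reserved for a story outside
    the user's usual interests -- a simple guard against filter bubbles.
    """
    feed = []
    p, g = iter(personalized), iter(general)
    interval = round(1 / diversity_ratio)
    for i in range(len(personalized) + len(general)):
        source = g if (i + 1) % interval == 0 else p
        item = next(source, None)
        if item is not None:
            feed.append(item)
    return feed

# Hypothetical example: a tech-focused reader still sees broader news.
personalized = ["tech A", "tech B", "tech C", "tech D", "tech E", "tech F"]
general = ["world news", "local politics"]
print(diversified_feed(personalized, general))
# prints ['tech A', 'tech B', 'tech C', 'world news',
#         'tech D', 'tech E', 'tech F', 'local politics']
```

The design choice here is deliberately blunt: a fixed quota is transparent and auditable, which matters when the goal is editorial accountability rather than pure engagement.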
AI is undoubtedly a promising technology, but it is not without its lights and shadows. We must ensure that its use benefits everyone, from content creators to consumers, and that it is deployed ethically, in line with journalistic principles and values. To do this, media organizations must prioritize ethics in their implementation of AI: humans must take the necessary steps to ensure it is used for the right reasons and under proper controls, from rigorous training to transparent data collection. Otherwise, in the long term, AI may cause the media more problems than benefits.
Stuart Almond
Intelligent Media Service
Sony Solutions Europe