Why artificial intelligence can go too far in the media
AI has enormous potential to transform the way media organizations receive, edit and distribute content.
In the digital era, new technologies let us achieve what was once considered impossible. From virtual assistants on our smartphones and automated transactions in financial services to applications in the defense sector, artificial intelligence (AI) plays a fundamental role in creating more efficient, effective and creative workflows across every industry.
According to the World Economic Forum, the AI sector will grow by 50% a year until 2025, when it is expected to reach a remarkable value of 127 billion dollars. This will have a visible effect on the media and entertainment sector: according to PwC, AI will contribute around 150 billion dollars a year to it.
AI has enormous potential to transform the way media organizations receive, edit and distribute content. It can improve efficiency and consistency, and add value where it is currently lacking. In short, it makes it possible to combine good content with deep personalization to create a more fluid and meaningful news experience for audiences around the world.
Since a future without AI seems impossible to imagine, it is crucial to use it well, starting with ethics. The race to develop or adopt the latest AI technologies is accelerating rapidly at every touchpoint, from content creation to consumption, placing growing responsibility on users and providers for its ethical impact.
How can we guarantee that the use of AI benefits everyone, from creators to consumers, while respecting today's journalistic principles and values?
The fight against fake news
According to a recent Reuters study, almost three quarters of media organizations are already interested in how AI can help them create and distribute content more efficiently, through applications such as voice recognition, face detection and automated editing corrections. Outlets such as Bloomberg already rely on automation to cover routine economic news (for example, financial market reports), sparing journalists that work. By 2027, 90% of all articles in the world are expected to be written by AI.
AI's potential to transform news and content production for the better is clear, and over the next few years it will play a crucial role in determining the content we see and read every day. But how much power and control should we grant it? While technology that "thinks" is quickly gaining in usefulness, it cannot be given absolute freedom and must adhere to some kind of ethical principles. This is especially important in the fight against fake news.
Machine learning (ML), defined as the science of enabling computers to learn from data, identify patterns and make decisions with minimal human intervention, is essential if AI is to fight fake news. The idea is that machines improve their performance over time and progressively become more autonomous. It is therefore no surprise that AI is being used to generate and select stories automatically.
Before reaching that point, however, ML algorithms must be trained and programmed by humans to improve the AI's accuracy. This matters because, without human intervention, machines lack basic human abilities such as common sense, understanding and the capacity to contextualize, which makes it very hard for them to determine correctly whether or not a piece of content is true. If a media outlet lets AI run its course without any human oversight (such as contextualization), it risks blurring the line between information and opinion, potentially encouraging fake news rather than fighting it.
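To make the point concrete, here is a deliberately naive sketch of the kind of pattern-matching such a system does. Every headline, label and scoring rule below is invented for illustration: a bag-of-words scorer trained on hand-labelled examples can pick up surface patterns, but it has no notion of context or truth, which is exactly why leaving it unsupervised is risky.

```python
# Toy illustration (not a production system): score headlines as "fake"
# or "real" by summing per-label word frequencies learned from a tiny
# set of hand-labelled examples. All data here is invented.
from collections import Counter

def train(examples):
    """Count word frequencies per label from (headline, label) pairs."""
    counts = {"fake": Counter(), "real": Counter()}
    for headline, label in examples:
        counts[label].update(headline.lower().split())
    return counts

def score(counts, headline):
    """Label a headline by which class's words it overlaps with more."""
    words = headline.lower().split()
    fake = sum(counts["fake"][w] for w in words)
    real = sum(counts["real"][w] for w in words)
    return "fake" if fake > real else "real"

examples = [
    ("miracle cure doctors hate revealed", "fake"),
    ("shocking secret they hide from you", "fake"),
    ("central bank holds interest rates steady", "real"),
    ("parliament passes annual budget bill", "real"),
]
model = train(examples)
print(score(model, "shocking miracle cure revealed"))  # → fake
print(score(model, "bank raises interest rates"))      # → real
```

Note what is missing: the scorer cannot tell a satirical headline from a lie, or a true but sensational story from a fabricated one. That contextual judgment is precisely the human contribution the paragraph above describes.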
Héctor Sierra, Sales Manager at Sony Europe, comments that "the potential of AI in any field, including the media, is incalculable. Even so, every technological advance carries a series of risks, so it is crucial to keep its implementation under close control and to work constantly on its proper development".
Personalization without filter bubbles
Content personalization can create higher-quality experiences for consumers, as streaming services such as Netflix have already shown through recommendations based on behavior and personal viewing history. The media are no exception and are already using AI to meet the demand for personalization.
For example, JAMES, a relatively recent recommendation service developed for News UK's The Times and The Sunday Times, learns individual preferences and automatically personalizes each edition (by format, time and frequency). In essence, its algorithms are programmed by humans, but they improve over time as the system itself pursues a set of agreed outcomes.
While algorithmic filtering (the automatic selection of which content to show users and how to present it) satisfies consumers' demand for personalization, it can also go too far. What if consumers only hear and read the news they want to, instead of what is actually happening around them and in the world?
This is what is known as the "filter bubble" problem: algorithms designed by platforms to keep users engaged can lead them to see only content that validates their beliefs and opinions. It is the media's responsibility to strike a balance between giving consumers content tailored to their personal interests and needs and ensuring that both sides of the story remain visible.
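That trade-off can be sketched in a few lines. In this invented example (the articles, topics and the `diversity` parameter are assumptions of the sketch, not a description of any real platform), a recommender ranks articles by overlap with a reader's click history; without a diversity rule it only ever serves familiar topics, and a simple quota for unclicked topics is one way to push back against the bubble.

```python
# Illustrative sketch (invented data): a click-history recommender that,
# with diversity=0, keeps serving familiar topics -- the filter bubble --
# and with diversity>0 reserves slots for topics the reader never clicked.
def recommend(history, candidates, diversity=0.0):
    """Return up to 3 article titles; `diversity` is the share of the
    candidate pool reserved for topics absent from the click history."""
    seen = set(history)
    familiar = [c for c in candidates if c["topic"] in seen]
    novel = [c for c in candidates if c["topic"] not in seen]
    n_novel = round(diversity * len(candidates))
    # novel items fill the reserved slots first, then familiar ones follow
    picks = novel[:n_novel] + familiar
    return [c["title"] for c in picks][:3]

history = ["politics", "politics", "sport"]
candidates = [
    {"title": "Election results analysis", "topic": "politics"},
    {"title": "Cup final report", "topic": "sport"},
    {"title": "New climate study", "topic": "science"},
    {"title": "Local arts festival opens", "topic": "culture"},
]
print(recommend(history, candidates))                 # only familiar topics
print(recommend(history, candidates, diversity=0.5))  # new topics surface
```

Real recommender systems are far more sophisticated, but the structural point survives: if engagement with past topics is the only ranking signal, unfamiliar perspectives never reach the reader unless the system is explicitly designed to include them.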
AI is undoubtedly a promising technology, but it has its light and dark sides. We must ensure that its use benefits everyone, from content creators to consumers, and that it is deployed ethically, in line with journalistic principles and values. For that, media organizations need to consider and prioritize ethics when implementing AI. In other words, humans must take the necessary steps to ensure that AI is used for the right reasons and that, when it is used, ethical controls are respected, from proper training to transparent data collection. Otherwise, in the long term, AI may create more problems than benefits for the media.
Stuart Almond
Intelligent Media Service
Sony Solutions Europe