Dubai | Artificial Intelligence Journalism
Sara Mohammad Ali
Have you ever imagined typing a few keywords for what you are thinking of and, seconds later, seeing a complete text written out in front of you? Nowadays that fantasy is becoming reality, thanks to artificial intelligence large language models that can easily generate their own texts. Teaching computers how to converse is extremely difficult, yet these models have made huge strides in text generation over the past few years. We are talking about collaboration and partnership between humans and machines. In this article, you will learn how these programs work, what problems and challenges they face, what the ethical considerations of AI are, and who wins the battle between the journalist and the machine.
What is the power of these systems? And why have they improved so much in the past few years?
Engineers are building new systems, called artificial intelligence large language models, that can understand the relationships between words. Big tech companies like Google, Microsoft and Facebook are developing them.
AI language models analyze massive amounts of written data. They learn which words tend to appear next to each other and how strongly words are related. From those relationships, the AI can generate its own text.
How do AI language models work?
Simply put, they work like the word predictor on your phone. GPT-3, for example, scales this feature up a thousandfold compared with phones: it can generate not just the next word but a whole paragraph or story that actually makes sense and can even be very interesting.
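To make the word-predictor intuition concrete, here is a minimal toy sketch: a bigram model that counts which word most often follows another in a tiny corpus, then predicts the next word from those counts. This is not how GPT-3 works internally (GPT-3 is a large neural network trained on billions of words); it only illustrates the core task of predicting the next word.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a real model is trained on billions of words.
corpus = (
    "the reporter wrote the story . "
    "the editor read the story . "
    "the reporter filed the story ."
).split()

# Count bigrams: follows[w] maps each next word to how often it follows w.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "story" follows "the" three times, more than any other word
```

Scaled up enormously, with relationships learned by a neural network instead of raw counts, this next-word game is what lets a model continue a sentence into a whole paragraph.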
Nowadays, many companies are using artificial intelligence large language models to perform many tasks, according to an interview with Samanyou Garg, founder of Writesonic, on The Vergecast podcast:
- Writing product descriptions: Shopping sites use it to describe products through “product description tools”.
- Writing blog posts and articles: You only have to tell the computer the topic, and then the AI takes 4 main steps:
- Based on the topic title, it generates the introduction.
- It suggests an outline.
- It generates all the sections of the article.
- It writes the content and assembles it all into one large article.
The AI produces up to 70% of the work as a draft, and the writers then apply their own changes and edits.
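The four steps above can be sketched as a simple pipeline. `call_model` here is a hypothetical stand-in for whatever language-model API a tool like Writesonic uses; it just echoes its prompt so the structure is runnable. The point is the shape of the workflow, with a human editing the assembled draft at the end.

```python
# Sketch of the four-step drafting pipeline described above.
def call_model(prompt):
    # Hypothetical stand-in for a real language-model API call.
    return f"[draft text for: {prompt}]"

def draft_article(topic):
    # Step 1: generate the introduction from the topic title.
    intro = call_model(f"Introduction for an article on {topic}")
    # Step 2: suggest an outline (here, a fixed three-heading stand-in).
    outline = [f"{topic}: part {i}" for i in (1, 2, 3)]
    # Step 3: generate every section of the article.
    sections = [call_model(f"Write the section '{h}'") for h in outline]
    # Step 4: assemble it all into one large draft for human editing.
    return "\n\n".join([intro] + sections)

draft = draft_article("AI in journalism")
print(draft)
```

The human's 30% happens after this function returns: cutting, rearranging and rewriting the assembled draft.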
An AI language model wrote an article on its own
To better explain the idea, I will tell you about the Guardian’s experience of publishing a full article written by the AI language model GPT-3:
GPT-3 was given these instructions: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, they chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. They cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
In addition, using this technique, a scriptwriter, for example, can feed in a bunch of ideas and the AI will suggest what he should do next; it is a collaborative partnership between them. The writer will edit, refine, organize, choose and change his ideas based on the AI’s suggestions.
AI SYSTEMS vs WRITERS: who’s the winner?
Will those machines still need our input, or are we done here?
Opinions are divided, with two broad camps – those who fear the “robots” will take over and lead to job losses, and those who believe AI will create new opportunities.
This second camp talks about a kind of collaboration between the two. And it actually depends on what kind of news you are working on. If you work in the news industry, AI will most probably help you with research and with finding sources. But where? Simply in the old, existing news. So far, AI cannot create new information. Yes, it can help, but it cannot do it all, for the moment at least!
It is still too early to talk about a winner, so why not focus our efforts on the advantages of cooperation between the two?
Problems and Challenges
Digital software that “thinks” is increasingly useful, but it does not necessarily collect or process information in an ethical manner. When using AI to enhance a piece of journalism, you should check that it fits journalistic values and ethics, and avoids sexual, religious, ethnic and other kinds of discrimination.
Journalists should be aware that algorithms can be false or misleading. In the end, AI software is programmed by humans with their own biases, which can lead to wrong conclusions. This matters especially when distinguishing between fact and fiction: AI gives us probabilities, which can be wrong in certain cases.
1- Credibility and transparency:
“Perhaps the biggest hurdle to machine intelligence entering newsrooms is transparency,” says Nausica Renner, editor at the Columbia Journalism Review. “Transparency, one of the core values of journalism, often conflicts with artificial intelligence, which often works behind the scenes.”
Media outlets must let their audience know what personal data they are collecting if they really want to maintain their credibility. And although powerful new tools allow editors to satisfy their audiences, they must also strive to inform users about what they may not want to know.
Likewise, reporters must do their best to explain how they use algorithms to find patterns or process evidence for their story.
2- Uncertainty and Criticism
When journalists write stories about a model’s uncertainty, they are demonstrating that the data and the fitted model are intrinsically “noisy”, and that the model cannot fully explain all of the data. They also let their audience know that they are aware of their analysis’ limitations and constraints. Effectively communicating uncertainty is a difficult task in and of itself, but it becomes even more difficult when it comes to investigations. Some types of reporting, for example, require journalists to be confident of wrongdoing before publishing a story about a crime. A machine learning prediction is unlikely ever to reach that criterion, that level of assurance.
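As a toy illustration of that gap (all the numbers here are made-up assumptions, not data from any real newsroom): suppose a model flags records as likely wrongdoing with 87% confidence, while the editorial bar before alleging a crime demands near-certainty.

```python
# Illustrative numbers only: a model's probability vs. a hypothetical editorial bar.
model_probability = 0.87   # model's confidence that a record shows wrongdoing
publish_threshold = 0.99   # assumed bar before alleging a crime in print

# Even at 87%, flagging 1,000 records would wrongly accuse about 130 people.
expected_false_positives = round((1 - model_probability) * 1000)

if model_probability >= publish_threshold:
    decision = "publish"
else:
    decision = "treat as a lead and verify with human reporting"

print(decision, "-", expected_false_positives, "expected false positives per 1,000")
```

This is why a machine-learning score is best treated as a tip that points reporters where to look, not as proof that can carry a story on its own.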