Artificial intelligence (AI) makes it very tempting to forgo the traditional translation process. After all, hiring a professional costs money, while using AI is free. And it can seemingly produce a perfect, human-like result. It’s a no-brainer then, isn’t it?
Well, to quote Spider-Man, 'not so fast, Osborn!'
Is it me, or is someone tripping here?
Ever heard of a hallucinating machine? Well, AI is very much capable of that, which makes it quite human, right? Perhaps, but you wouldn’t want anyone to work on your project under the influence of hallucinogenic substances, would you?
According to IBM, 'AI hallucinations are when a large language model (LLM) perceives patterns or objects that are non-existent, creating nonsensical or inaccurate outputs.' This means that AI could introduce utter nonsense into your text. On the surface, the output as a whole will make sense, so your team might overlook it. But someone out there will spot the error and share their findings with the rest of the world. And suddenly you’re the butt of the joke!
This might not seem like a big deal, but in some contexts it could even be life-threatening. Or at least cost you a pretty penny. And your reputation. Remember the failed Glasgow Willy Wonka Experience that was advertised with the help of AI? Or maybe you’ve seen a recent LinkedIn post from the Edinburgh-based interpreter Jonathan Downie? It seems a spelling bee would be too much for AI to handle...
These Romans are crazy! – and so is AI translation
Asterix and Obelix wouldn’t make it far as international communicators, would they? I know, I know – they were being colonised so it’s only fair they weren’t too keen on Roman culture. But this is not Ancient Rome and we are not fighting off our enemies. We are attempting effective communication with international clients. And to do so, we need a big dollop of cultural awareness.
Here are a few examples: AI translating an English text into Polish or French might not know when to use an official form of address in those languages. A machine translating marketing materials for a wedding show might not understand that referencing white dresses in China is inappropriate. And I’m pretty sure that if AI were translating a Polish text featuring the number 2137, it would not pick up the reference to John Paul II...
Want this translated? Roll with butter!
AI may be intelligent, but it’s still artificial – hey, it’s in the name! I ran a little experiment on this using ChatGPT and found that AI struggles to handle a text containing idioms and proverbs.
Although ChatGPT managed to translate some idioms, the output was unreliable. Sometimes it provided a literal translation; other answers featured a correct equivalent of the idiom. This is dangerous, as getting some accurate responses can give us a false sense of security. And then, when something gets mistranslated or translated literally, we might overlook it.
There are boy jobs and girl jobs
I never thought I’d be quoting Theresa May on my website, but here we are... AI and machine translation are sexist and biased. A few years ago, we all talked about it in the context of Google Translate assigning masculine pronouns to Finnish sentences describing professional contexts, and feminine ones to those describing domestic chores.
I took ChatGPT for a spin round Polish, and at first I was disappointed. My assumption was wrong: AI can provide nuanced replies when it comes to gender. Well, NOT SO FAST, OSBORN!
Yes, it did give me both masculine and feminine forms when I asked for translations of the words ‘nurse’ and ‘doctor’. But when I used both in a sentence, the doctor instantly became male and the nurse female.
But there's more. According to ChatGPT, the terms 'president' and 'prime minister' don’t have feminine versions. And the gender-neutral 'spouse' is translated into a masculine form. As an inclusive translator, I do not like this one bit!
You wouldn’t steal a car
Remember those anti-piracy DVD adverts (which, interestingly, were viewed by people who had obtained said DVD legally)? They were used to spread awareness of copyright and intellectual property. But did you know that by using AI you’re extremely likely to infringe those laws?
AI is trained on input entered by users and on materials available online. But the fact that a text, or part of it, has been published online doesn’t mean anyone can just reproduce it in their own work. There’s even a fancy word for that: stealing.
You’re most likely not even aware that your latest super-creative-fancy-as-hell slogan that AI helped you come up with (let’s be real, it did the heavy lifting) had actually been used by a competitor years ago. And is subject to copyright. And you’re breaking the law. And… you get the point.
Hold my drink (with a paper straw)!
AI may be free to use, but it costs the Earth. According to MIT Technology Review, training a single AI model can emit as much carbon as five cars in their lifetimes. Isn’t it ironic how companies constantly compete against each other in the ‘sustainability Olympics’ but will turn to AI without considering the planet? Think about that next time you’re proudly slurping your vegan milkshake through a paper straw.
But sustainability is not the only thing to worry about when it comes to AI. Human rights are also a consideration. According to Amnesty International, 'From predictive policing tools, to automated systems used in public sector decision-making to determine who can access healthcare and social assistance, to monitoring the movement of migrants and refugees, AI has flagrantly and consistently undermined the human rights of the most marginalised in society. Other forms of AI, such as fraud detection algorithms, have also disproportionately impacted ethnic minorities, who have endured devastating financial problems as Amnesty International has already documented, while facial recognition technology has been used by the police and security forces to target racialised communities and entrench Israel’s system of apartheid.'
Ugh, stop being so dramatic, Alicja!
I understand that technological progress is impossible to halt, and I’m not proposing that we stop using AI altogether. But we need to be aware that AI won’t solve every issue and can, in fact, create new ones. Especially since even the people who develop AI don’t fully understand how it works.