
Can AI lie?


We live in a technological world. Gone are the days when we depended upon the direct word of another person for our information. Over the last two centuries, the pace of change in communication technologies has accelerated to the point where we can get another person's point of view from almost any part of the world. We are no longer restricted* to a single source of information but can pick and choose what we wish. But is what we read the truth, and if it is not, can we even tell whether we are only interacting with a machine? In other words, can Artificial Intelligence lie?

*Depending on a whole range of factors that are beyond the scope of this post, though it should be clear what they could be.


I am independent of any organisations mentioned, and am in no way speaking or writing for them, nor endorsed by them.

Names and logos may be trademarks of their respective owners. Please review their websites for details.


What is lying?

Before we can answer the question ‘Can AI lie?’, we have to define what a ‘lie’ actually is. To me, it is to state information or present ‘facts’ that you knowingly understand to be false. This is unlike a ‘white lie’, where something is false or inaccurate but you genuinely believe it to be fact and truthful.

Thus we have three states: Truth, False truth and Lie.
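The three states above can be sketched as a small classification, where a statement's state depends on both its accuracy and the speaker's belief in it. This is a minimal illustrative sketch; the names and the `classify` helper are my own, not from any particular system:

```python
from enum import Enum

class Statement(Enum):
    TRUTH = "truth"              # accurate, and the speaker believes it
    FALSE_TRUTH = "false truth"  # inaccurate, but the speaker believes it
    LIE = "lie"                  # inaccurate, and the speaker knows it

def classify(is_accurate: bool, speaker_believes_it: bool) -> Statement:
    """Classify a statement by accuracy and the speaker's belief."""
    if is_accurate:
        return Statement.TRUTH
    return Statement.FALSE_TRUTH if speaker_believes_it else Statement.LIE
```

Note that the key distinction between a false truth and a lie lives entirely in the speaker's belief, which is exactly what makes the two indistinguishable from the words alone.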

The machine

A machine, and in this context a ‘computer’, will blindly carry out the instructions that you give it. It does not know whether something is accurate; it just proceeds regardless. It is the human who programs the computer to include logic that tests the validity of the data being processed. And because humans make mistakes, computers have bugs.

Therefore, the words on the screen will be whatever a human has instructed the machine to display. Those words could represent any of the three states, but the computer won't know which one.
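The point that the machine displays words with no notion of their truth, and that any validity check must be written in by a human, can be sketched as follows. The `KNOWN_FACTS` whitelist is purely hypothetical, just to show that the checking logic is human-supplied:

```python
def display(words: str) -> str:
    """A computer outputs whatever it is told; it cannot tell
    which of the three states the words are in."""
    return words

# Human-written validation: a (hypothetical) set of statements
# that a person has decided count as accurate.
KNOWN_FACTS = {"2 + 2 = 4"}

def display_with_check(words: str) -> str:
    """The same output, but with a human-programmed validity test."""
    tag = "verified" if words in KNOWN_FACTS else "unverified"
    return f"[{tag}] {words}"
```

If the human encodes the wrong facts in `KNOWN_FACTS`, the machine will faithfully mark falsehoods as verified, which is the ‘humans make mistakes, so computers have bugs’ point in miniature.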

Artificial Intelligence

For me, this is a means of programming a computer with algorithms and mathematically orchestrated neural networks that attempt to replicate the processes that occur in a biological processing unit: the brain. As humans have the capability to lie, it is logically conceivable that AI could be trained to do so too.

Now comes the real issue: how can we tell? If an ‘automaton’ is connected to a non-AI computer, it will read out the words and give no indication of whether it thinks they are truthful. But what if the AI was trained to consider the possibility that the data was false? Would, or could, it give off the subtle signals that we do when we know we are not telling the truth? Would it carry that extra element of thought we have when lying, the one that produces the signs of deception? To me, stating a truth or a false truth is simply a matter of explaining what I understand; lying, on the other hand, carries the additional knowledge that the communication is really false, and it is that which makes lying harder to do and causes those tell-tale signals.


If a human is within visual or auditory range, be it in person or via another technology, then we are in a position to judge which state the information they are conveying is in. This judgement can be based on trust, reputation, and so forth: any one or more factors that give somebody credibility. If an AI automaton is in the same situation, then you are in with a chance, but clearly its credibility would be harder to justify.

But what if you are reading just words on a screen, presented by a computer that cannot say which of the three states the information is in and will give off no signals to indicate falsehood? Those words could have been written by anyone, from anywhere. At this point we rely on extra metadata, such as the username and the associated personal information attributed to it. Now we have a connection between the words and a person, and as long as somebody else hasn't impersonated or hacked that person's account, we can gather additional information from other sources to establish that person's credibility. Then, and only then, can we believe that they are writing truths or false truths.


If a person is telling a false truth, then we need to be in a position to convince them otherwise based on what we know. Such a position is the scientific method of ‘peer review’, where your ‘peers’, being other people, have the right to question the information you present; you then need to convince them through logic or additional referential information, or indeed realise that you have made a mistake and correct it.

If a person is telling a lie, then the same applies, but because they know they are expressing false information, it can be much harder to get an admission of guilt and have the information corrected.

If an AI automaton is telling a false truth or a lie, then it's up to the controlling humans to make the correction. But the ability to do so depends on the same means already described for a person.

Remote learning

Take the situation where you are learning remotely. You are inclined to believe the information being presented because, it being new to you, you have less chance of knowing its accuracy. It won't be completely new, as learning is a process of building blocks, and new information has to connect with what we already know. We would have had to be taught remotely, with no human interaction, for us to have no chance of detecting false information, unless it was inconsistent.

We have to believe the new information based upon all of the supporting credibility attributed to the known author. If that supporting credibility is missing, incomplete or inconsistent then start asking questions.

But what if the information is generated by an AI or a machine? What then? Could they be deceiving us without giving any signals that the information is false? It is precisely these quandaries that justify my belief that a computer (even with AI) is a tool to aid learning, not a replacement for the human.


I do think that AI could be trained to knowingly deceive, and it is up to us to be in control and know the truth.

What do you think? Please let me know in the comments.


Gareth Barnard

Gareth is a developer of numerous Moodle Themes including Essential (the most popular Moodle Theme ever), Foundation, and other plugins such as course formats, including Collapsed Topics.

One thought on “Can AI lie?”

  • Fascinating !
    I’d thought about this, but not in so much depth.
    With humans I find that belief is a major factor in truth.
    e.g. someone’s political, religious, social, etc. belief can lead them to false truths, which are almost impossible to change.
    I guess AI would not have beliefs, although I suppose their initial programming and data sources are an equivalent construct for the foundation of what is correct.
    My head hurts the more I think about this! lol

