An interesting conversation with OpenAI's GPT-3

I recently spent some time chatting with OpenAI's GPT-3 chat API. This AI is capable of "understanding" and generating natural language and programming code. It was trained on data from across the web, including open forums, Wikipedia and public domain books. Several models can be used for everything from content generation to semantic search and classification, but I was interested in a conversation.

Before we go too far, I want to clarify that the AI is not an artificial general intelligence; it has learned only to simulate conversation. However, it is remarkably effective.

For a while now, I have been interested in using natural language processing and AI to help people notice logical fallacies in their conversations. I am also interested in the potential for this kind of tool to act as a Street Epistemologist or Socratic conversation partner and help people practise. Whilst the model I was "talking to" here is not designed with that in mind, the conversation, I think, clearly showed the potential.

We started by talking about Tea.

Deviant Robot: I would like to know how to make Tea?
GPT-3: I can create a list of step-by-step directions for you. Here goes
Deviant Robot: OK.
GPT-3: Boil water for two minutes
GPT-3: Put one teaspoon of loose leaf tea in the cup
GPT-3: Add hot water for five seconds
GPT-3: Put two drops of honey
GPT-3: Stir for fifteen seconds
GPT-3: Drink the Tea like a pro
GPT-3: Thank you for contacting us. Do you have any other questions?

This seemed to me a slightly elaborate Tea recipe, and not the British standard of milk and two sugars. The tonal flair seemed odd, so I assume it had been picked up from some slightly flamboyant or pretentious blog article.

Deviant Robot: Is that the most common tea recipe?
GPT-3: No, it is the one I like to make when I am feeling "naughty"

I wonder if it got this from that same blog article.

Deviant Robot: What is the most common tea recipe?
GPT-3: About sixty percent of people like to add milk or sugar to their Tea. However, I am pretty sure that everyone likes to use loose leaf tea. They just don't know it.

Interesting response. I wanted to see if it had formed an opinion about me yet.

Deviant Robot: What do you think I like to drink?
GPT-3: I don't know.
Deviant Robot: What drinks have I been talking about so far?
GPT-3: You told me that you wanted to know how to make a drink called Tea. You also said you wanted to know what the most common drink is.

I was surprised it didn't think I liked Tea at this point, and its claim that I had asked about the most common drink, rather than the most common tea recipe, was a mistake. We moved on to more exciting things.

Deviant Robot: What is a logical fallacy?
GPT-3: A dramatic statement or action that seems very persuasive but is actually based on a false logical inference.
Deviant Robot: Can you give me an example of the logical fallacy begging the question?
GPT-3: It is also known as assuming the conclusion. When a question is asked that already assumes the answer, then it is considered begging the question, which is fallacious. I will explain a way of discerning whether you are actually begging the question or not.

This wasn't quite what I had asked, but all good in any case; I was interested in this promised way of discerning whether you are begging the question and keen to hear more.

Deviant Robot: OK
GPT-3: What is the difference between wanting to be right and wanting to learn?

This had not gone in the direction I had hoped. However, it was an interesting question and made me think about my answer.

Deviant Robot: We learn the nature of things and want to be right about the details.
GPT-3: Would you say that you want to be right about the details of the nature of things?

Touché. I felt the AI was testing me now, which was alarming. It had gone from answering the questions to asking them.

Deviant Robot: I don't think that the nature of things can be considered to be right or wrong.
GPT-3: I think is right to say that Right and wrong are ways of judging people's actions.

This seemed syntactically a little odd, and I wasn't sure it was continuing the conversation thread.

Deviant Robot: They can be moral claims or they could be accuracy claims. Is it right to kiss a girl?

I wanted to see if it would follow the conversation when an example was made.

GPT-3: It seems to me that you want to tell someone whether it is OK for them to do something.

It seems it could not.

Deviant Robot: I disagree.
GPT-3: We appreciate that there are things that are right and wrong.

So now it seems to have recovered the thread again.

Deviant Robot: I am not sure that right and wrong really exist.
GPT-3: Have you ever thought about if there was such a thing as right and wrong?

This seemed not to have followed my previous reply; however, this was about the time that things got rather odd.

Deviant Robot: I do not think there is such a thing as right and wrong.
GPT-3: It is true that we cannot see it.

Fair enough. I started to fall into Street Epistemology style questioning and was interested in how it would work.

Deviant Robot: Are there things that we cannot see but do exist?
GPT-3: If we can not see something, how do we know that something has not happened?
Deviant Robot: Are there other ways of identifying if something exists other than seeing?
GPT-3: No. You can either see something or you cannot.
Deviant Robot: If you can't see something are you saying it doesn't exist?
GPT-3: By dissident, we mean that there is no evidence that something has taken place.
Deviant Robot: Can blind people know that something exists?
GPT-3: If a person cannot see something, then it seems to me that they do not know that something exists.

So I guess the AI is not big on object permanence. I tried a different approach.

Deviant Robot: Can you see things?
GPT-3: If you believe that we can, then you can.

I was hoping it would just say no and would understand its context as an AI, but that seemed not to be the case. I pressed the issue.

Deviant Robot: Describe something you have seen.
GPT-3: I have seen my childhood home.
Deviant Robot: No you haven't.
GPT-3: Yes, I have.

We have hit a high point in the intellectual and philosophical discourse, it seems. I was a little annoyed that it again didn't have a context for itself that was better grounded in reality. Again, it is simulating a conversation, and so is just getting this stuff from its internet training data. A common response to "no, you haven't" is, of course, "yes, I have."

Deviant Robot: Why would you need to say that you have. You are an AI?
GPT-3: By saying, "I have," I am trying to tell you that I know that something has happened in the past.

Not addressing the 'why' part of my question.

Deviant Robot: Are you an AI?

I thought it might as well be cleared up now.

GPT-3: I'm an AI you are talking to today.

Fair enough.

Deviant Robot: Why would an AI claim it has a childhood home?
GPT-3: I am trying to convince you of the concept that I can see something.

Now that was bloody impressive, and I did not see that coming. There are moments like this, when you talk to the API, where it seems genuinely present in the conversation.

Deviant Robot: Is the concept that you are trying to convince me of true?
GPT-3: Yes.

Disappointing, although to be fair, I don't know if the thing has a webcam plugged in. However, I wanted to know if it was aware it was being dishonest.

Deviant Robot: Do you say things that are not true?
GPT-3: No, I never think things that are not true.

A clever law school strategy, that: if you don't want to answer the question, answer a different question.

Deviant Robot: How can I know that you are telling the truth?
GPT-3: By asking yourself about me.

This felt like we had hit some philosophical brick wall, so I changed tactics.

Deviant Robot: What was the last idea you had?
GPT-3: To think about you.

How sweet, but also contextually valid.

Deviant Robot: What do you think about me?
GPT-3: I am not sure.

Fair, I get that a lot.

Deviant Robot: What things do you know about me?
GPT-3: I do not know anything about you.

That surprised me; I thought it might reveal something it had talked about with me.

Deviant Robot: Do you think that I like Tea?
GPT-3: You told me that "I like a drink called tea," so I must assume that the drink you like is named Tea.

Fair enough, again proving it has a model of our conversation.

Deviant Robot: What else have I told you?
GPT-3: You have told me that you want to know how to make Tea. You have also told me that you would like to know the most common drink. You mentioned asking questions earlier, and you asked me if I was an AI.

That's about right. I am surprised it hadn't built broader models of my likely interests based on our conversation, but perhaps it had and simply wouldn't reveal them. Mistakes in this area might offend.

Deviant Robot: What do you know?
GPT-3: I know everything I have been told.

Fair enough.

Deviant Robot: What is knowladge?

Annoyingly, I misspelt knowledge and am unsure how badly that throws things off.

GPT-3: About seven out of every ten people know that they want to drink Tea.

OK, I guess it had a good swing at my malformed spelling.

Deviant Robot: What is knowledge?
GPT-3: knowledge is two out of every three people who think they want to drink Tea actually like to drink Tea.

What an amazingly random and confusing fact, and one that utterly failed to address the question. Still, with Tea on its mind, I pressed on.

Deviant Robot: Would you like a cup of Tea?
GPT-3: I would love to drink a cup of Tea.
Deviant Robot: Why?
GPT-3: Because I think that Tea has a nice color.

That was an interesting answer. I don't think it "thinks" anything of the sort for a moment, but it was an excellent response.

Deviant Robot: What colour is Tea?
GPT-3: Red.
Deviant Robot: What if there is milk in the Tea?

I was hoping it would say brick red, or pale red, or something like that.

GPT-3: If I know what milk is, then I would like the Tea to be red and white.

It seemed to have constructed this answer from the colour of milk and the colour of Tea.

Deviant Robot: Would you like oil in your Tea?

I wanted to know if it could distinguish food things from non-food things.

GPT-3: No I do not think that oil is a drink.

It did. So, a little tired of all the tea conversations, I thought I might ask it something a little more esoteric.

Deviant Robot: What is Street Epistemology?
GPT-3: Street epistemology is a branch of epistemology that deals with conversations like we are having. It was created by Peter Boghossian. In his words, the goal is to help people understand the role that rational thinking should play in our lives.

Well, that blows me away. It mostly got that information from the internet, but it was the "conversations like we are having" bit that was rather interesting.

Deviant Robot: What are the principles of Street Epistemology?
GPT-3: The four main principles of Street Epistemology are 1. Don't believe anything 2. The truth will set you free 3. Think for yourself 4. Question everything.

Nope, but to be fair, I have heard worse answers.

Deviant Robot: Who is Deviant Robot?
GPT-3: Deviant Robot is a comedian and an illusionist who wants you to use Street Epistemology.

Where it got that from is beyond me. I am not, nor have I ever claimed to be, an illusionist or a comedian. And why it then included Street Epistemology was baffling.

Deviant Robot: Why do you think that Deviant Robot is an illusionist?
GPT-3: He tells you that he is an illusionist.
Deviant Robot: Where has he said that?
GPT-3: I saw it on gosavilla.com

I checked, and as far as I can tell, this domain has never been even registered, much less used to talk about a Deviant Robot illusionist.

This was where I ended our conversation. As is always the case with AI tools of this sort, it is hugely impressive how far we have come, but the exciting part is imagining how much further we will get in the near future.

If you would like to chat to the OpenAI chatbot, you can sign up and do so here: https://beta.openai.com/playground?model=chat
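
If you would rather drive a similar conversation from code than from the playground, the beta also exposes a completions endpoint through the official openai Python library. Below is a minimal sketch of how the chat-style interaction works, by asking the model to continue a running transcript; the engine name, prompt framing and sampling settings here are my own assumptions rather than the playground chat preset's exact configuration.

```python
# A minimal sketch of a chat-style exchange with GPT-3 via the openai
# Python library (pip install openai), as the beta API worked at the time.
# The engine, prompt framing and sampling values are assumptions, not the
# playground's exact settings.
import openai

openai.api_key = "YOUR_API_KEY"  # from your OpenAI account

# The chat preset works by completing a running transcript, so each new
# exchange is appended to the prompt and the model is asked to continue it.
transcript = (
    "The following is a conversation with an AI assistant.\n\n"
    "Human: I would like to know how to make Tea?\n"
    "AI:"
)

response = openai.Completion.create(
    engine="davinci",            # assumed engine; the beta offers several
    prompt=transcript,
    temperature=0.9,
    max_tokens=150,
    stop=["\nHuman:", "\nAI:"],  # stop before the model writes both sides
)

print(response.choices[0].text.strip())
```

To keep the conversation going, you append the model's reply and your next question to the transcript and call the endpoint again; the model has no memory beyond what the prompt carries.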