AI - OpenAI ChatGPT
Introduction
ChatGPT is an AI model built by the company OpenAI. It can chat with us like a human. People use ChatGPT for many tasks, such as asking for information or writing new articles.
In this article, you will learn how ChatGPT works and how you can use it in your projects on the CreatiCode playground.
How does ChatGPT work - predicting the next word
Although ChatGPT can simulate human conversation, it does not function in the same way as human brains.
In fact, ChatGPT does not truly understand human language. Instead, it repeatedly guesses the next word to say based on the conversation. For example, when we say “United States of”, it will guess that the next word is most likely “America”. That’s why we often say ChatGPT is a “completion” model, because it attempts to complete a sentence in a way that makes sense.
Although this method seems very simple, it works remarkably well most of the time, because ChatGPT has read a vast amount of books, articles, and websites; therefore, it can find a very reasonable next word and keep doing so.
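The idea of repeatedly guessing the next word can be sketched with a toy model in Python. This is only an illustration of the "completion" concept: ChatGPT uses a large neural network trained on vast amounts of text, not simple word counts like the example below.

```python
from collections import Counter, defaultdict

# Toy "completion" model: count which word follows each word in some
# training text, then always predict the most common follower.
training_text = (
    "the united states of america "
    "the united states of america "
    "the united kingdom of great britain"
)

followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen in training."""
    return followers[word].most_common(1)[0][0]

print(predict_next("of"))  # "america" follows "of" most often in training
```

Because "america" appears after "of" twice but "great" only once, the model completes "United States of" with "America", just like the example in the text.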
ChatGPT can give wrong answers
Although ChatGPT works well in many cases, it can still make silly mistakes. This problem is known as “hallucination.” It happens primarily for two reasons:
- Wrong Training Data: We often say “garbage in, garbage out”. If the data it reads is incorrect or out of date, then it will provide wrong answers.
- No Correct Answer: Sometimes, there is no correct answer for some questions, or ChatGPT has not seen the correct answer in its training data. In such cases, ChatGPT may attempt to generate a plausible answer. For example, as of March 2023, it would give a wrong answer like this (later versions of ChatGPT have fixed this issue):
As a programmer, you need to help avoid or fix such mistakes. For example, if you provide ChatGPT with more accurate or up-to-date facts related to the question, it will be able to give a more reliable response.
Write Accurate and Detailed Prompts
The answers we receive from ChatGPT largely depend on the request or question we submit to it. This is called a “prompt”.
When we search on Google, the keywords we type in determine which websites are returned. The more accurate our keywords, the more likely we are to get the websites we need.
Similarly, the quality of the prompts we provide to ChatGPT determines the quality of the responses we receive from it. The more accurate and detailed they are, the more likely we are to get the response we are looking for.
For example, here is the response we get when we use a general prompt:
And this is what we get when we provide much more details in our prompt:
Role-Playing Prompts
Another great way to improve the quality of the prompt is “role-playing”. We can tell ChatGPT to play a particular role, such as a “teacher” or a “doctor”, and this often leads to much better results. It also often helps to tell it who we are.
For example, without role-playing, we can get a fairly complex answer:
However, if we tell ChatGPT to pretend to be a teacher, we can get an answer that’s easier to understand:
The ChatGPT Request Block
There are several ways for us to incorporate ChatGPT into our projects. For example, in game projects, we can allow players to chat with non-player characters for entertainment or information. If the project is a tool, we can offer a chatbot to answer user questions.
To send a request to the OpenAI ChatGPT server and get the response back, you can use the following block in the “AI” category.
It accepts the following input parameters:
- Request: The first input is the prompt you are sending to ChatGPT. Hint: to enter a prompt with multiple lines, press SHIFT + ENTER to start a new line.
- Result: You need to select a variable from this dropdown. You can use an existing variable or create a new one. The response from ChatGPT will be stored in this variable.
- Mode: This dropdown controls whether the block waits for the entire response to come back before continuing to the next block.
- Selecting the “waiting” option causes the program to pause at this block until the complete response from ChatGPT is received. Once the entire response is fully received and stored in the designated variable, your program resumes execution. This mode is similar to the behavior of Scratch’s standard “broadcast [MESSAGE] and wait” block. For example, this program makes the dog say the response after waiting:
- When set to the “streaming” option, the program immediately moves on to the next block without waiting for ChatGPT’s full response. The result variable will be empty right after this block runs, but as ChatGPT begins responding, the variable is continuously updated in real time with the partial response. This mode lets us show the partial response to users as soon as possible, so it is better suited to long responses. For example, this program will repeatedly update what the dog says using the response:
To find out when we have received the entire response from ChatGPT, you can look for the check mark emoji (✅) at the end. If the result variable contains this symbol, it means we have reached the end of the response. Note that this check mark emoji is not appended to the variable in the “waiting” mode.
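The streaming behavior can be sketched in Python. The background worker below is only a stand-in for the ChatGPT server filling in the result variable; the polling loop plays the role of the blocks that repeatedly display the partial response and watch for the check mark.

```python
import threading
import time

END_MARK = "\u2705"  # the check mark emoji appended at the end of a stream
result = ""

def fake_chatgpt_stream():
    # Stand-in for the server: the response arrives chunk by chunk,
    # so the result variable grows over time.
    global result
    for chunk in ["Hello", ", ", "world!", END_MARK]:
        time.sleep(0.01)   # simulate network delay
        result += chunk

threading.Thread(target=fake_chatgpt_stream).start()

# The program continues immediately and can show partial text as it
# arrives, stopping once the end-of-response mark shows up.
while END_MARK not in result:
    time.sleep(0.005)
print(result.replace(END_MARK, ""))  # prints "Hello, world!"
```

In "waiting" mode there would be no loop: the program would simply pause at the request block until the whole response is stored.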
- Length: This is the maximum number of tokens for the response. You can think of a token as part of a word. For example, “strange” is composed of 2 tokens: “str” and “ange”. ChatGPT’s response will be cut off if it exceeds this limit. To avoid this, you can encourage ChatGPT to use fewer words by saying so explicitly in your prompt, such as “answer within 50 words” or “give me a concise answer.”
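The cut-off behavior can be sketched in Python. Real tokenizers split words into subword pieces as described above; the example below approximates one token per word just to show how a limit truncates a response, and is not OpenAI's actual tokenizer.

```python
# Crude stand-in for tokenization: one "token" per whitespace-separated
# word. The real tokenizer splits words into smaller pieces.
def truncate_to_limit(text, max_tokens):
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

answer = "Paris is the capital and largest city of France"
print(truncate_to_limit(answer, 4))  # prints "Paris is the capital"
```

Notice the sentence simply stops mid-thought, which is why it helps to ask ChatGPT for a concise answer rather than relying on the limit.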
- Temperature: This is a number between 0 and 1. A higher value makes the response more random and creative; a lower value makes it more deterministic and predictable.
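What temperature does can be sketched with the standard softmax-sampling idea: the model's scores for each candidate next word are divided by the temperature before picking one. The scores below are made up for illustration; this is the general technique, not ChatGPT's internal code.

```python
import math
import random

def sample_word(scores, temperature, rng):
    # Softmax with temperature: dividing scores by a small temperature
    # sharpens the distribution (predictable picks); a large temperature
    # flattens it (more random, creative picks).
    scaled = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    for word, weight in scaled.items():
        r -= weight
        if r <= 0:
            return word
    return word

scores = {"America": 5.0, "Brazil": 1.0, "Atlantis": 0.5}
rng = random.Random(0)

# Very low temperature: the top-scoring word wins essentially every time.
picks = [sample_word(scores, 0.1, rng) for _ in range(20)]
print(picks.count("America"))  # prints 20

# High temperature: the choice becomes much more varied.
picks_hot = [sample_word(scores, 5.0, rng) for _ in range(20)]
print(sorted(set(picks_hot)))
```

This is why a low temperature is better for factual answers and a higher one for creative writing.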
- Session: This dropdown controls whether we are continuing the previous conversation or starting a new chat. If it is a new chat, ChatGPT will not remember anything from the previous messages (the “context”).
- For example, if we ask two related questions using “new chat” sessions, ChatGPT does not remember the first question or answer when it answers the second question:
- However, if we change the second request to “continue” the previous conversation, ChatGPT will “remember” we are talking about the square of numbers from the conversation history:
- Note that whenever the green flag button is clicked, the chat history is cleared, so even if you specify “continue”, ChatGPT will still start a fresh new chat.
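The bookkeeping behind “new chat” and “continue” can be sketched in Python: continuing means the previous messages are kept and sent along with the new one, while a new chat forgets them. This is a simulation of the idea only; it does not call any real service.

```python
history = []

def send(prompt, session):
    # "new chat" forgets everything said before; "continue" keeps it.
    if session == "new chat":
        history.clear()
    history.append(prompt)
    # What the server effectively receives is the whole history:
    return list(history)

send("What is the square of 5?", "new chat")
context = send("How about 6?", "continue")
print(context)  # both questions are included, so "6" makes sense in context
```

If the second call had used "new chat" instead, only "How about 6?" would be sent, and there would be no way to know we were still talking about squares.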
You cannot “continue” the session forever
You might consider using the ChatGPT block in the following way: start a session with the “new chat” option on the first user message, then use the “continue” option thereafter. There is one problem with this approach: the ChatGPT service has a limit on the number of words it can receive and send back, which is approximately 5,000.
When you use the “continue” session type for a new message from the user, all the previous messages are resent by the CreatiCode server to ChatGPT on your behalf, along with the new message. As the conversation continues, the chat history grows larger and larger. When the entire conversation reaches approximately 5,000 words, the chat must restart as a “new chat”, and all chat history is lost. You will notice this when the response ends with the special phrase “TOKEN LIMIT REACHED”.
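This growth-and-reset behavior can be sketched in Python. The 5,000 figure comes from the article; the word counting below is a simple approximation of the real token accounting.

```python
WORD_LIMIT = 5000
history = []

def add_message(text):
    # Every "continue" resends the whole history, so the total size
    # of the conversation is what counts against the limit.
    history.append(text)
    total_words = sum(len(m.split()) for m in history)
    if total_words > WORD_LIMIT:
        history.clear()            # the chat restarts as a "new chat"
        return "TOKEN LIMIT REACHED"
    return "ok"

# Each message is 600 words, so the 9th message pushes past 5,000.
status = "ok"
for i in range(9):
    status = add_message("word " * 600)
print(status)  # prints "TOKEN LIMIT REACHED"
```

After eight 600-word messages the history holds 4,800 words; the ninth brings it to 5,400, crossing the limit and wiping the history.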
System Requests
We can also send another type of request to ChatGPT called a “system request”. ChatGPT will treat such requests as coming from the programmer rather than the end user, and it will do its best to fulfill them. You can use the following block to send a system request:
The meanings of the 4 input parameters are the same as for the normal request block; the only difference is that a system request is treated with higher priority.
Since system requests are very powerful, you should not send user messages as system requests. Instead, use this block only for setting up the conversation and giving ChatGPT high-priority rules to follow. For example, if you want ChatGPT to play a specific role regardless of the user’s input, you should specify this role using a system request.
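The split between the two request types mirrors the message-role convention that chat models follow: a system message sets the high-priority rules once, and user messages come afterwards. The sketch below only builds the message list to show that separation; it does not call any real API.

```python
messages = []

def system_request(text):
    # Rules from the programmer, e.g. fixing ChatGPT's role up front.
    messages.append({"role": "system", "content": text})

def user_request(text):
    # Messages typed by the end user of the project.
    messages.append({"role": "user", "content": text})

system_request("You are a friendly math teacher for young students.")
user_request("What is a prime number?")
print([m["role"] for m in messages])  # prints ['system', 'user']
```

Whatever the user types later, it is added as an ordinary user message, so it cannot override the role set by the system request.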
Cancel a ChatGPT Request
Sometimes the user may want to cancel an ongoing request. For example, the chatbot may take too long to write the response, or the response may not be what the user is looking for. Additionally, the user cannot send another request until ChatGPT finishes responding to the current one.
To cancel the request, you can run the following block:
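The effect of cancelling can be sketched in Python with a shared flag that tells a slow streaming worker to stop early. The worker is a stand-in for the in-flight ChatGPT response; the real cancel block abandons the request in the same spirit.

```python
import threading
import time

cancelled = threading.Event()
result = ""

def slow_response():
    # Stand-in for a slow, streaming ChatGPT reply.
    global result
    for chunk in ["This ", "answer ", "is ", "very ", "long..."]:
        if cancelled.is_set():   # stop as soon as the user cancels
            return
        result += chunk
        time.sleep(0.1)

worker = threading.Thread(target=slow_response)
worker.start()
time.sleep(0.25)                 # the user waits a moment, then cancels
cancelled.set()
worker.join()
print(result)                    # only a partial response arrived
```

After cancelling, the result variable holds whatever partial text had arrived, and the user is free to send a new request.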
Safety for Use in Schools
Unlike the publicly available ChatGPT API from OpenAI, the block in the CreatiCode playground is safe for school-age students to use. A strong content moderation process ensures that only appropriate requests are sent to ChatGPT, and only appropriate responses are returned.
For example, if the user request contains inappropriate content, it will be rejected with a response like this:
The chat messages will not be shared with any third party. However, a teacher can review the chat messages of students in their classes.
Rate Limit
The ChatGPT block is publicly available to all users in the CreatiCode playground. However, during periods of high demand, the block’s usage may be subject to a rate limit.