Most nights our children ask us to tell them a story at bedtime. So we want to build a fabulous moral story generator that helps us write a nice story with a moral value. Since most of us do not have wisdom like Aesop, we will take help from OpenAI. 🙂
In this article, we will explore basic prompt engineering and learn a few good practices. We will use vanilla JavaScript, so anyone with basic JavaScript knowledge can follow along.
First, let's see how we can call the API using JavaScript's fetch function.
const apiKey = "YOUR API KEY"
In the fetch body, we send just two properties: model and prompt.
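Fleshed out, the call might look something like the sketch below, assuming the legacy /v1/completions endpoint; the prompt text is just an illustration.

```javascript
const apiKey = "YOUR API KEY"

// The request body carries just the two properties: model and prompt.
const body = {
  model: "text-davinci-003",
  prompt: "Write a moral story"
}

async function getCompletion() {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}` // the key goes in the Authorization header
    },
    body: JSON.stringify(body)
  })
  return response.json()
}
```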
Model
The model is the most important property. So what is a model? A model is an algorithm that uses training data to recognize patterns, and from these patterns it can make predictions or decisions. There are different models, so you need to choose one based on your requirements and your budget, because the most advanced models are also the most expensive. You can use https://gpttools.com/comparisontool to compare different models.
The API will return the JSON object below. The answer to the prompt is in the text property of the first element of the choices array.
{ ... }
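For reference, a completions response has roughly the shape below (the field values here are illustrative, not from a real call), and the answer is read from choices[0].text:

```javascript
// Representative shape of a /v1/completions response
// (field values are illustrative).
const data = {
  object: "text_completion",
  model: "text-davinci-003",
  choices: [
    {
      text: "\n\nHonesty is the best policy.",
      index: 0,
      logprobs: null,
      finish_reason: "stop"
    }
  ],
  usage: { prompt_tokens: 7, completion_tokens: 9, total_tokens: 16 }
}

// The answer is in the text property of the first choice.
const answer = data.choices[0].text.trim()
```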
This is a very basic API call. We can refactor it and make it more readable and manageable using the openai package.
// to install the package, run the below command
// npm install openai
Let's go forward. Now we prompt with a moral, and we want the OpenAI API to return a small story like one of Aesop's fables. Here is our first prompt.
import { process } from '/env'
Now look at the API response data object below. It returns an incomplete response. Also, check the finish_reason property: its value is length. A value of length is bad (the completion was cut off), while a value of stop is good (the model finished on its own). So why does this happen? If we don't provide the max_tokens property in the API call, the OpenAI API uses its default max_tokens value, which is 16. So what is a token? Check the next section.
{ ... }
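A tiny helper makes the finish_reason check explicit (the sample response here is illustrative):

```javascript
// `sample` stands in for a parsed API response that was cut short.
const sample = {
  choices: [
    { text: "Once upon a time, there was a little", finish_reason: "length" }
  ]
}

function isComplete(response) {
  // "stop" means the model finished on its own;
  // "length" means it ran out of tokens and the text is truncated.
  return response.choices[0].finish_reason === "stop"
}
```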
Token
Each token is a piece of a word. One token is approximately 4 characters, or about 0.75 words, which means 100 tokens is roughly 75 words. If you don't specify the max_tokens property in the request body, the API uses the default value of 16. And if you don't allow enough tokens, your completion will be cut short.
Tokens are also important because each token incurs a charge and takes processing time. So you should limit the number of tokens to keep costs down and performance up.
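The rule of thumb above can be turned into a quick estimator (a heuristic only, not the real tokenizer):

```javascript
// Rough token estimate using the ~4 characters per token rule of thumb.
function estimateTokens(text) {
  return Math.ceil(text.length / 4)
}

// ~0.75 words per token
function estimateWords(tokens) {
  return Math.round(tokens * 0.75)
}
```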
Now let's set the max_tokens property in our request body and try again.
import { process } from '/env'
// ...same request as before, plus:
max_tokens: 700
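Put together, the request might look like this sketch (the prompt wording here is illustrative):

```javascript
// The same request as before, now with max_tokens set so the story
// is not cut short.
const completionRequest = {
  model: "text-davinci-003",
  prompt: "Write a short story in the style of Aesop's fables about the moral: time and tide wait for none.",
  max_tokens: 700 // roughly 500 words of room for the story
}
// then: const response = await openai.createCompletion(completionRequest)
```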
I got the story below, which is fantastic 🙂.
The little girl hummed a tune as she twirled her skirt and ran in the park. She ran to the river where the tide was starting to come in, filling the sand around her feet with water. She paused, watching the river, and then smiled as the waves lapped around her toes.
Suddenly, a voice caught her attention. She looked up and saw an old man leaning on a stick. He looked kindly at her and said, “Time and tide wait for none.” She looked around her, and saw the water rising quickly around her feet. Her smile faded as she realised that she had been so lost in her play that she had failed to notice the tide coming in.
The old man smiled down at her and said, “You must learn to use your time wisely, for time and tide wait for none.”
The little girl thought about the old man’s words and realized the wisdom behind them. She learnt an important lesson that day, a moral that she would always remember: time and tide wait for none.
No matter how much we wish it, no amount of wishing or wanting can change the fact that time and opportunity do not wait for us. We must seize every possible moment to make the most out of our lives.
Now we will try one more, and this time we will give OpenAI some examples so its response will be more relevant. Check the request below: between the triple hashes we describe the example, and below it we set the instructions.
const response = await openai.createCompletion({ ... })
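A few-shot request along these lines is one way to structure it (the example story between the hash markers is illustrative):

```javascript
// A few-shot prompt: the block between the ### markers shows the model
// one example Moral/Story pair, and the final lines give the actual
// instruction.
const fewShotPrompt = `###
Moral: Slow and steady wins the race.
Story: A hare mocked a tortoise for being slow, so the tortoise
challenged him to a race. The overconfident hare stopped for a nap,
while the tortoise plodded on steadily and won.
###

Using the example above, write a short moral story for the moral:
Time and tide wait for none.`

const request = {
  model: "text-davinci-003",
  prompt: fewShotPrompt,
  max_tokens: 700
}
```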
This time I got the response below, which is awesome.
Once upon a time, there lived an old man and his wife by the banks of a river. They were very wise and hardworking people, and they lived a very content life together.
One day, the old man decided to go fishing in the river, and he chose his favorite spot. He was so engrossed in fishing that he didn’t notice the water around him getting deeper and deeper. Before he knew it, the tide had come in and he was dangerously close to being swept away.
Just then, the old man’s wife realized what was happening and she quickly ran to his rescue. She shouted and waved frantically at her husband, but no matter how hard she tried to get his attention, he was so enraptured in his fishing that he couldn’t hear her.
Finally, the old woman was able to get him to look up, and he realized he was in danger. Without wasting any time, the two of them quickly made it out of the river before it was too late. The old couple embraced each other tightly, and the old man thanked his wife for saving him, saying “Time and tide wait for no man”. They had learned their lesson: you need to stay attentive and prepared in order for you to take advantage of opportunities that come your way.
If you add another example, the result might be even more relevant. There is no hard and exact process for prompt engineering; you need to adjust your prompt several times and follow some best practices, and by doing this you may get a better result. In our next article, we will explore other best practices.