Artificial intelligence (AI) is an advanced form of computing designed to mimic human intellect, built for specific purposes on specific hardware. With proper instruction, AI can be exponentially better than humans at certain tasks. Life, however, is exceptionally complex: degrees of intelligence, associations between countless aspects, and a practically infinite number of considerations all add to its overall complexity. Imagine trying to recount the entirety of your own life, past, present, and future, let alone that of a nonexistent person. Our main objective for this project is to create realistic individuals using ChatGPT. We aim to design prompts/programs that optimize the AI's capabilities to generate hyper-realistic, hyper-detailed, and hyper-consistent summaries of people's lives. At the time of writing, OpenAI's GPT-3 model (text-davinci-003) is the most advanced model available to us and well suited to this project. However, this is not as simple as just asking it nicely; we will need to carefully craft our prompts and continuously refine them to achieve our goal.
In order to achieve optimal output, we will first need to address limitations and detailing. This will be approached by generating very specific pre-summaries, which will then be used as source material to create even more specific mini-summaries providing even more detail on particular aspects of a person's life. This hierarchical approach will not only allow us to procure more data about a particular subject, but also to bypass the token limit's hindrances. Let's start by looking at the final code thus far, break down each part, and relate it to the aforementioned objectives:
const { Configuration, OpenAIApi } = require("openai");require('dotenv').config()
const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY,});
const openai = new OpenAIApi(configuration);
async function generatePerson() {
const prompt = "Generate a random person and include information about his/her name, age, sex, current occupation, hobbies, nationality, and current country of residence";
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 128, });
const g1 = completion.data.choices[0].text.trim();
const json = JSON.stringify({g1});
return {g1, json};}
async function sumTemplate(g1) {
const prompt = `Generate a summary of ${g1}'s lifetime, make sure to do a summary for each year.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 512, });
const g2 = completion.data.choices[0].text.trim();
return g2; }
async function optimismCheck(g2) {
const prompt = `"${g2}" is too optimistic, try again.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt, max_tokens: 512, });
const g3 = completion.data.choices[0].text.trim();
return g3;}
async function firstExpansion(g3) {
const prompt = `I rate "${g3}" a 5/10, try and give me a 10/10 by providing more detail in each aspect of life, and discuss even more aspects.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 512, });
const g4 = completion.data.choices[0].text.trim();
return g4;}
async function D1(g4) {
const prompt = `Read "${g4}" then try and provide even more detail, and make it even longer.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 1024, });
const g5 = completion.data.choices[0].text.trim();
return g5;}
async function D2(g5) {
const prompt = `Read ${g5} then write a summary of their lifetime, and make it even longer and more detailed. Include important relationships, and events`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 2048, });
const g6 = completion.data.choices[0].text.trim();
return g6;}
async function simX(g1, g2, g3, g4, g5, g6) {
const miniworld = `${g1} ${g2} ${g3} ${g4} ${g5} ${g6}`;
const prompt = `Use information from ${miniworld} to simulate the first year of ${g1}'s life. Make sure to include simulations of key events and relationships.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 1024, });
const output = completion.data.choices[0].text.trim();
return output;}
async function generateLifeSimulation(g1, age) {
let output = '';
for (let i = 0; i < age; i++) {
const prompt = `Use information from ${g1} to simulate the ${i + 1}${i === 0 ? 'st' : i === 1 ? 'nd' : i === 2 ? 'rd' : 'th'} year of ${g1}'s life at the age of ${i + 1}.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 1024, });
const simulation = completion.data.choices[0].text.trim();
output += `\n\nYear ${i + 1} simulation:\n\n${simulation}`; }
return output;}
async function run() {
const { g1 } = await generatePerson();
const g2 = await sumTemplate(g1);
const g3 = await optimismCheck(g2);
const g4 = await firstExpansion(g3);
const g5 = await D1(g4);
const g6 = await D2(g5);
const firstYearSimulation = await simX(g1, g2, g3, g4, g5, g6);
const age = parseInt(g1.match(/\d+/)[0]);
const remainingSimulations = await generateLifeSimulation(g1, age);
console.log(g1);
console.log(`First year simulation:\n${firstYearSimulation}`);
console.log(`Remaining year simulations:${remainingSimulations}`);
async function aDaySim(simulation, g1) {
const prompt = `Based on ${simulation}, simulate each day in a week of ${g1}'s life hour by hour at his/her current age.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 1024, });
const week = completion.data.choices[0].text.trim();
return week; }
// aDaySim is intentionally left non-functional at this stage (see the closing
// discussion): `simulation` is never defined inside run(), so the call is
// commented out. To enable it, pass in a specific year's simulation text.
// aDaySim(simulation, g1)
//   .then(week => console.log(week))
//   .catch(error => console.error(error));
}
run();
const { Configuration, OpenAIApi } = require("openai");
require('dotenv').config()
const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY,});
const openai = new OpenAIApi(configuration);
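Before any of this will run, the require('dotenv').config() line expects a .env file in the project root holding your API key; dotenv loads that file into process.env, which is where process.env.OPENAI_API_KEY in the configuration above gets its value. A minimal .env would contain a single line (the value shown is a placeholder, not a real key):
OPENAI_API_KEY=sk-your-key-here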
This section is the very stem of our hierarchical design, as it is where our imaginary person is first generated. It also serves as the general structure for every method that follows. To save ourselves the trouble of explaining the same thing for every method, I will do an in-depth breakdown of this first one, and highlight the differences in each of the following ones.
Going line-by-line:
“async function generatePerson() {“ is just the declaration of the method (generatePerson), and the empty parentheses are where any information this method is supposed to use would be declared. Since it is the very first method, there is no pre-existing information for it to use.
Next,
async function generatePerson() {
const prompt = "Generate a random person and include information about his/her name,age,sex,current occupation,hobbies,nationality, and current country of residence";
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 128, });
const g1 = completion.data.choices[0].text.trim();
const json = JSON.stringify({g1});
return {g1, json};}
The next line stores the prompt, which is “Generate a random person and include information about his/her name, age, sex, current occupation, hobbies, nationality, and current country of residence”, so that is what is passed on to GPT-3. The following line containing “await” tells the program to wait for a response before moving on to the next line.
The “prompt” line holds the question we want GPT-3 to respond to. The output of this is then stored as g1 and is passed on to the next stage of the sim.
The max_tokens line is the first limitation we will deal with. It specifies the maximum length of the response, measured in tokens (small chunks of text, roughly three-quarters of a word each) rather than whole words or characters; note that a larger token limit will result in longer, but less concise, generations. Having multiple methods generating different information allows us to maximise the token limits and generate information as detailed as possible.
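As a rough rule of thumb (an approximation, not an exact rule), one token corresponds to about four characters of English text, so a quick way to sanity-check whether a prompt will fit comfortably under a given max_tokens is a small estimator like this hypothetical helper:
// Hypothetical helper: rough token estimate, assuming ~4 characters per English token.
function estimateTokens(text) {
return Math.ceil(text.length / 4);
}
// e.g. estimateTokens("Generate a random person") returns 6 for this 24-character string.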
completion.data contains the data returned from the API call, and completion.data.choices is an array of possible completions that the API returned. Since we only requested a single completion, we can assume that the first choice in the array is the one we want. .text is then used to extract the generated text from the chosen completion, and .trim() removes any whitespace from the beginning or end of it. The g1 variable is assigned the generated text, which represents a random person generated by the language model. The JSON.stringify() method is used to convert g1 to a JSON string. Finally, the function returns an object containing both g1 and its JSON representation.
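For reference, here is a sketch of the shape of the object those lines unpack; the field values are purely illustrative, and only the fields this program actually uses are shown:
// completion.data (illustrative values):
// {
//   "choices": [ { "text": "\n\nName: Maria Lopez...", "index": 0, "finish_reason": "stop" } ],
//   "usage": { "prompt_tokens": 25, "completion_tokens": 98, "total_tokens": 123 }
// }
// completion.data.choices[0].text is therefore the generated text itself.
Moving on to the next method: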
async function sumTemplate(g1) {
const prompt = `Generate a summary of ${g1}'s lifetime, make sure to do a summary for each year.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 512, });
const g2 = completion.data.choices[0].text.trim();
return g2; }
This piece utilizes the g1 const from generatePerson in its own prompt, which is “Generate a summary of ${g1}'s lifetime, make sure to do a summary for each year”. Take note of the “g1” in the parentheses next to the method's name: it calls upon the specific person generated earlier by referencing the const g1 returned from the preceding method, using the template-literal format “${g1}”. It then goes through the same processes seen in generatePerson above, and the final product is stored as const g2. The purpose of this method is to generate further personal data about our generated individual, as well as give the API a base template from which to build the enhanced summaries that follow.
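Since every method from here on repeats the same request/extract/trim boilerplate, it is worth noting, as a hypothetical refactor rather than something the current code does, that the shared pattern could be factored into a single helper, with each stage supplying only its own prompt template:
// Hypothetical refactor: one helper for the repeated pattern, reusing the
// openai client configured at the top of the program.
async function refine(previous, template, maxTokens = 512) {
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt: template(previous),
max_tokens: maxTokens,
});
return completion.data.choices[0].text.trim();
}
// e.g. const g2 = await refine(g1, p => `Generate a summary of ${p}'s lifetime, make sure to do a summary for each year.`);
Next comes the first of the program's quality filters: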
async function optimismCheck(g2) {
const prompt = `"${g2}" is too optimistic, try again.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 512, });
const g3 = completion.data.choices[0].text.trim();
return g3;}
This function is one of the first quality filters in the program, and is crucial to the realism of the final product. As you might have guessed, it tones down the optimism of the generation by taking g2 and re-generating it with the focus on making it overall less optimistic before passing it on. This was a necessary addition: during development, the final products would always end up unrealistically positive, which is unlike most lives in general. The product is stored as g3, and then passed on.
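The same idea generalises beyond optimism: any critique can be applied as a regeneration pass. A minimal sketch (hypothetical, reusing the openai client configured earlier):
// Hypothetical generalisation of the optimism filter.
async function critiquePass(text, critique) {
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt: `"${text}" ${critique}, try again.`,
max_tokens: 512,
});
return completion.data.choices[0].text.trim();
}
// e.g. const g3 = await critiquePass(g2, "is too optimistic");
// or a further pass: await critiquePass(g3, "is too pessimistic");
Next, the expansion stage: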
async function firstExpansion(g3) {
const prompt = `I rate "${g3}" a 5/10, try and give me a 10/10 by providing more detail in each aspect of life, and discuss even more aspects.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 512, });
const g4 = completion.data.choices[0].text.trim();
return g4;}
async function D1(g4) {
const prompt = `Read "${g4}" then try and provide even more detail, and make it even longer.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt, max_tokens: 1024, });
const g5 = completion.data.choices[0].text.trim();
return g5;}
async function D2(g5) {
const prompt = `Read ${g5} then write a summary of their lifetime, and make it even longer and more detailed. Include important relationships, and events`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 2048, });
const g6 = completion.data.choices[0].text.trim();
return g6;}
This section is a combination of three functions that first expand the variety of topics discussed in the output, then enhance the detail on each aspect, and finally expand each year with even more detail, drawing on important relationships and momentous events. This all starts with g3 from the previous section being run through three different processes to achieve the aforementioned detail in the summary. The final output (g6) is an extra-detailed, lengthened summary of our individual's life, which is crucial for the next stage: it means we can maximize the token limit in the larger simulations, because the model won't need to generate as much new data anymore.
This code defines three `async` functions (`firstExpansion`, `D1`, and `D2`) that use the OpenAI API to generate natural language text based on a given prompt. The `firstExpansion` function takes a single parameter `g3`, the de-optimised summary from the previous section. Its prompt frames that summary as a 5/10 and asks the model to improve it to a 10/10 by providing more detail and covering more aspects of life; the API's response is returned as a string. The `D1` function takes a single parameter `g4`, the string generated by `firstExpansion`, and prompts the model to provide even more detail and make the response even longer. The `D2` function takes a single parameter `g5`, the string generated by `D1`, and prompts the model to write a summary of the person's lifetime, including important relationships and events, longer and more detailed still.
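Viewed together, g2 through g6 form a fixed refinement pipeline. As a hypothetical restructuring (not the current code), the chain could be expressed as a list of prompt templates folded over the running text, which would make adding, removing, or reordering refinement stages trivial:
// Hypothetical restructuring: the refinement chain as data, reusing the
// openai client configured at the top of the program.
const stages = [
{ template: t => `"${t}" is too optimistic, try again.`, maxTokens: 512 },
{ template: t => `I rate "${t}" a 5/10, try and give me a 10/10 by providing more detail in each aspect of life, and discuss even more aspects.`, maxTokens: 512 },
{ template: t => `Read "${t}" then try and provide even more detail, and make it even longer.`, maxTokens: 1024 },
{ template: t => `Read ${t} then write a summary of their lifetime, and make it even longer and more detailed. Include important relationships, and events`, maxTokens: 2048 },
];
async function runStages(start) {
let text = start;
for (const { template, maxTokens } of stages) {
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt: template(text),
max_tokens: maxTokens,
});
text = completion.data.choices[0].text.trim();
}
return text; // equivalent to g6, given g2 as the starting input
}
Now, on to the simulation stage of the actual program: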
async function simX(g1, g2, g3, g4, g5, g6) {
const miniworld = `${g1} ${g2} ${g3} ${g4} ${g5} ${g6}`;
const prompt = `Use information from ${miniworld} to simulate the first year of ${g1}'s life. Make sure to include simulations of key events and relationships.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 1024, });
const output = completion.data.choices[0].text.trim();
return output;}
async function generateLifeSimulation(g1, age) {
let output = '';
for (let i = 0; i < age; i++) {
const prompt = `Use information from ${g1} to simulate the ${i + 1}${i === 0 ? 'st' : i === 1 ? 'nd' : i === 2 ? 'rd' : 'th'} year of ${g1}'s life at the age of ${i + 1}.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 1024, });
const simulation = completion.data.choices[0].text.trim();
output += `\n\nYear ${i + 1} simulation:\n\n${simulation}`; }
return output;}
async function run() {
const { g1 } = await generatePerson();
const g2 = await sumTemplate(g1);
const g3 = await optimismCheck(g2);
const g4 = await firstExpansion(g3);
const g5 = await D1(g4);
const g6 = await D2(g5);
const firstYearSimulation = await simX(g1, g2, g3, g4, g5, g6);
const age = parseInt(g1.match(/\d+/)[0]);
const remainingSimulations = await generateLifeSimulation(g1, age);
console.log(g1);
console.log(`First year simulation:\n${firstYearSimulation}`);
console.log(`Remaining year simulations:${remainingSimulations}`);
async function aDaySim(simulation, g1) {
const prompt = `Based on ${simulation}, simulate each day in a week of ${g1}'s life hour by hour at his/her current age.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 1024, });
const week = completion.data.choices[0].text.trim();
return week; }
// aDaySim is intentionally left non-functional at this stage (see the closing
// discussion): `simulation` is never defined inside run(), so the call is
// commented out. To enable it, pass in a specific year's simulation text.
// aDaySim(simulation, g1)
//   .then(week => console.log(week))
//   .catch(error => console.error(error));
}
run();
This is the last section, where the API is used to generate natural language text based on prompts, and that text is then used to simulate a person's life. The output is a series of strings representing different aspects of the person's life, including a simulation of their first year, their remaining years, and a week in their life at a given age. Here, simX takes six parameters (g1 through g6), which are all strings generated by the earlier functions. It creates a new prompt asking the model to simulate the first year of g1's life using information from the other parameters, and then uses the API to generate a response to the prompt, which is returned as a string. Next comes generateLifeSimulation(g1, age), which takes two parameters (g1 and age). It generates a series of prompts asking the model to simulate each year of g1's life up to the given age, generates a response to each prompt, and concatenates them into a single string, which is returned as the output of the function. aDaySim then takes simulation and g1, and generates a prompt asking the model to simulate a week in g1's life based on the information in simulation. Lastly, run() is the main function that ties everything together. It first generates a person using generatePerson(), then generates g2, g3, g4, g5, and g6 using the functions above. It then calls simX() to simulate the first year of the person's life, and generateLifeSimulation() to simulate the remaining years. Finally, it would call aDaySim() to simulate a week in the person's life at their current age, and logs the results to the console.
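One caveat worth flagging in generateLifeSimulation: the inline ternary that builds ordinal suffixes only handles 1st, 2nd, and 3rd, so year 21 comes out as "21th". A small helper (a sketch of one way to fix it) covers every case, including the 11th–13th exceptions:
// Sketch: correct ordinal suffix for any year number.
function ordinal(n) {
const rem100 = n % 100;
if (rem100 >= 11 && rem100 <= 13) return `${n}th`; // 11th, 12th, 13th
switch (n % 10) {
case 1: return `${n}st`;
case 2: return `${n}nd`;
case 3: return `${n}rd`;
default: return `${n}th`;
}
}
// The loop's prompt could then read: `... simulate the ${ordinal(i + 1)} year ...`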
That is all for the work already done; now we'll look over what this code is further capable of. In its current state, the program will only output detailed summaries for every year of a person's life, but if you follow the approach used here, sourcing information and refining/defining perfect generations, it is well capable of simulating weekly, or even hourly, summaries for a particular person. Take a look at the aDaySim function in the code: whilst rendered non-functional at this stage, I have left it in to provide a starting point for those who might wish to extend the program to output more granular generations about their individual. All that would be required is to create methods that lead down to the days themselves: if at the moment we are working with years, you would just have to create method-prompts that drill down into months in particular years, weeks in particular months, days in particular weeks, and so on. All the code in Section 5 makes this possible by giving the token limit room to work solely on simulating the timely data. While it is possible to just skip from yearly to weekly generations, doing so would only result in poorer simulations, as you would be asking the API to generate a weekly summary based on its knowledge of yearly behaviour, whereas a weekly summary based on monthly knowledge would be far better grounded. I believe that by following this method of generating, storing, and passing information from macro to micro, the possibilities are endless, and maybe the runtimes as well.
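As a concrete illustration of one rung down that ladder, here is a hypothetical month-level method (its name and prompt are examples only, not part of the current program); it sources from a single year's simulation in exactly the same way the year-level methods source from the lifetime summary:
// Hypothetical example: drilling down from a year simulation to one of its
// months, reusing the openai client configured at the top of the program.
async function monthSim(yearSimulation, g1, monthNumber) {
const prompt = `Based on ${yearSimulation}, simulate month ${monthNumber} of that year of ${g1}'s life. Include key events and relationships.`;
const completion = await openai.createCompletion({
model: "text-davinci-003",
prompt,
max_tokens: 1024,
});
return completion.data.choices[0].text.trim();
}
// Weeks would then source from the month output, days from weeks, and so on,
// before aDaySim finally breaks individual days into hours.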
For any further enquiries, please contact me at: oyusuf01@qub.ac.uk.