By Eric Mersch
With artificial intelligence dominating the news cycle, I wanted, as a CFO, to understand how I could better use AI for applications germane to the finance function (and beyond).
Recently, I took a prompt engineering course focused on OpenAI’s ChatGPT. Prompt engineering is the process of guiding generative artificial intelligence (generative AI) solutions via your prompts to produce your desired outputs. Even though generative AI attempts to mimic humans, the technology requires detailed instructions before it can create high-quality, relevant outputs.
About ChatGPT
The course I chose was specific to the ChatGPT AI tool. ChatGPT is the name of the commercial product OpenAI developed using its proprietary GPT technology. GPT (Generative Pre-trained Transformer) refers to the type of Large Language Model (LLM) and the specific technology and processes used to produce human-like conversation. The GPT software is pre-trained by consuming a large amount of data, allowing the software to identify language patterns. ChatGPT was trained on 570 GB of data.[1] GPT is among the most advanced LLMs in commercial use today.
Some other key Artificial Intelligence technology definitions:
Generative vs. Traditional AI
Generative AI refers to a type of artificial intelligence technology capable of learning from diverse datasets and producing human-like text that is contextually relevant and coherent. Generative AI differs from Traditional AI in that the latter is trained on a single data type to produce output for a specific task.
Transformer Software
Transformer software trains the LLM to understand the context of language terms. For example, when a user writes “station,” the Transformer allows the LLM to know that the user prompt relates to “radio station” instead of “train station” based upon earlier words in the prompt and the contextual interpretation of the data ingested.
Writing the Right AI Prompts to Generate More Relevant and Useful Outputs
I quickly learned to adapt specific patterns in writing my prompts to generate better AI outputs.
Persona Prompt
The persona prompt guides the LLM to produce output from the perspective of the selected persona. For example, a CFO may want output that a fellow CFO would produce, written from the perspective of a business intelligence analyst, from the point of view of the sales operations team, or in the form of questions that the audit committee would ask. The list of personas is endless. You can even input a persona defined by the Myers-Briggs Test by asking the LLM to respond as Introverted, Intuitive, Thinking, and Judging, i.e., “INTJ,” or Extroverted, Sensing, Thinking, and Judging, i.e., “ESTJ.” By selecting a specific persona, prompt engineers can create a more human-like response, tailor the style and tone for easier consumption by readers, provide the context to ensure the content aligns with the intended message or purpose, and consistently report key performance metrics.
To provide a persona prompt example that CFOs might use, I used the LLM to prepare the CEO and CFO of a public company (CrowdStrike) for a quarterly earnings call.
I first describe the data to be input. Then, I ask the LLM to act as an equity analyst reviewing the data and to produce three possible questions that an equity analyst might ask after hearing the earnings call.
While this is a simple example, a skilled CFO using prompt engineering could further improve their results by adding more information; for example, asking for precise questions that the equity analyst who covers the company might ask other companies during their earnings calls.
After providing the transcript of CrowdStrike’s earnings call, I asked the LLM to respond as an equity analyst and provide three possible questions for the management team.
My prompt and the LLM’s response are shown in the screenshot below (I am using the ChatGPT 3.5 mobile version). I was mildly shocked by the accuracy and detail of the questions.
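As a rough illustration of the pattern, a persona prompt like the one above can be assembled programmatically before being pasted into the chat. The role wording, task text, and transcript placeholder below are my own assumptions for illustration, not the exact prompt used in the course.

```python
def build_persona_prompt(persona: str, task: str, context: str) -> str:
    """Assemble a persona-style prompt: role first, then the task, then the data."""
    return (
        f"Act as {persona}.\n"
        f"{task}\n\n"
        f"--- BEGIN CONTEXT ---\n{context}\n--- END CONTEXT ---"
    )

# Hypothetical example mirroring the earnings-call exercise above
prompt = build_persona_prompt(
    persona="an equity analyst who covers cybersecurity companies",
    task=(
        "After reading the earnings call transcript below, "
        "write three questions you would ask the management team."
    ),
    context="[paste earnings call transcript here]",
)
print(prompt)
```

Putting the role statement first matters: it sets the perspective before the model reads the task and data, which is the essence of the persona pattern.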
Few-Shot Prompt
In addition to the persona prompt, the few-shot prompt can be used to teach the LLM to identify patterns. For example, I taught the LLM to identify actions to take when forecasting business performance using a <situation>, <think>, <action> format. The LLM made a recommendation that matched the result I was seeking. Once I have created such a prompt, I can reuse it like application software to elicit recommendations for other, similar situations.
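A few-shot prompt can be sketched as a handful of worked examples followed by a new, unfinished case for the model to complete. The forecasting scenarios and tag style below are invented for illustration; only the <situation>/<think>/<action> structure comes from the course example above.

```python
# Hypothetical worked examples in (situation, think, action) form
examples = [
    ("Q3 pipeline coverage fell below 3x.",
     "Coverage under 3x signals forecast risk.",
     "Lower the Q3 revenue forecast and accelerate pipeline generation."),
    ("Churn in the SMB segment doubled quarter over quarter.",
     "Rising churn erodes the recurring-revenue base.",
     "Reduce the net-retention assumption in the forecast model."),
]

# Render the examples as "shots" the model can pattern-match against
shots = "\n\n".join(
    f"<situation>{s}</situation>\n<think>{t}</think>\n<action>{a}</action>"
    for s, t, a in examples
)

# End with an open <think> tag so the model continues the pattern
new_case = "Enterprise deal cycles lengthened by 30 days."
prompt = shots + "\n\n" + f"<situation>{new_case}</situation>\n<think>"
print(prompt)
```

Ending the prompt mid-pattern (after an open <think> tag) nudges the model to fill in the reasoning and then the action, just as the completed examples do.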
Meta-Language Creation Prompt
With the meta-language creation prompt, I provide a set of guidelines for the LLM to use in generating its output. The prompt takes the format, “When I say ‘this,’ I mean ‘that.’” One prompt I created instructed the LLM as follows: “When I write ‘variations <something>,’ I mean give me ten different variations.”
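The same convention can be sketched in code: state the rule once, then prepend it to any shorthand request so the model knows how to interpret it. The rule wording and the sample shorthand below are my own assumptions, not the course's exact text.

```python
# Hypothetical meta-language rule, stated once up front
META_RULE = (
    "When I write 'variations <something>', I mean: "
    "give me ten different variations of <something>."
)

def expand(shorthand: str) -> str:
    """Prepend the meta-language rule so the model interprets the shorthand."""
    return f"{META_RULE}\n\n{shorthand}"

print(expand("variations <a one-sentence FY25 revenue forecast summary>"))
```

Once the rule is established in a conversation, each later shorthand request stays terse while still producing the full, rule-governed output.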
For example, here is my series of prompts and the corresponding AI results around revenue forecasts:
Summary
I included the above prompt categories and examples to help you understand the complexity of AI prompts (at least when using ChatGPT) and the importance of understanding each method. These basic prompt techniques demonstrate that real skill is required to maximize the quality of LLM output. I recommend that professionals beginning to apply AI to problem-solving take a course in prompt engineering. You can then perfect your AI skillset through iteration and practice so that your ChatGPT outputs are optimized.
[1] Business Insider, “Google researchers say they got OpenAI’s ChatGPT to reveal some of its training data with just one word”, Beatrice Nolan, Dec. 4, 2023.
NOTE: These insights were from a seven-week Prompt Engineering course for ChatGPT, a Coursera class from Vanderbilt University Associate Professor Jules White.