If you've tried to use ChatGPT to write a blog post or do anything more than simple Q&A, you've noticed that it's really sensitive to prompts. Large language models are prone to hallucinations and jailbreaks in general, but even simple changes in the way you prompt them often produce significantly different results. Studies have shown that "engineering" a prompt specific to your use case can increase the accuracy of ChatGPT's responses by up to 88%.
So should you be skilling up in prompt engineering in 2024? Let me answer that by telling you something that happened to me recently.
I was playing a trivia game at home with my kids and the question "What is the capital of Estonia?" came up. I couldn't remember, and it annoyed the hell out of me. I know the answer. I remember taking the ferry across from Helsinki once and getting my hair cut during the trip. I remember buying the legal limit of duty-free booze and taking it back to Finland. But I couldn't answer.
The next morning I saw a picture of a bear (actually in a computer vision model I was working on) and I remembered Karhu (Finnish for "bear"), the brand of the beer we bought in Tallinn. The capital of Estonia is Tallinn. Perkele. (Damn it.)
What happened? I knew the answer, but I couldn't get at it. Was the bear pic simply the prompt I needed?
This is not completely orthogonal to the way large language models work. Describing LLMs as simply predicting the next word in a sequence feels kind of reductive (ChatGPT generates astonishingly high-quality text), but it's not completely wrong either. At their core, LLMs take a stream of text as input and predict output text called a completion. Prompt engineering directs that input stream, and hence the prediction. (I'm really simplifying here, but this is not wrong.)
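To make "text stream in, completion out" concrete, here's a minimal sketch using the OpenAI Python client. It assumes you've installed the `openai` package and have an API key in your environment; the model name is just illustrative.

```python
# Minimal sketch: the prompt is just input text, and the model
# predicts a completion. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # swap in whatever model you have access to
    messages=[{"role": "user", "content": "What is the capital of Estonia?"}],
)

# The completion is the model's predicted continuation of your text stream.
print(response.choices[0].message.content)  # e.g. "The capital of Estonia is Tallinn."
```

Change the wording of that one user message and you change the stream the model is completing, which is why small prompt tweaks can swing the output so much.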
Learning the basics of prompt engineering will help you get significantly better results from your generative AI applications. There's an excellent free guide from OpenAI here. If you have anything to do with planning how your organization will use gen AI, then learning the basics of how neural networks and large language models work will serve you well too.
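For a taste of what those basics look like in practice, here's a hedged sketch of two common techniques: giving the model explicit instructions and showing it one worked example (few-shot prompting). The model name, messages, and settings below are placeholders, not recommendations.

```python
# Sketch of two basic prompt-engineering techniques:
# 1) explicit instructions in a system message, 2) a few-shot example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        # Instructions: tell the model its role, output format, and constraints.
        {"role": "system", "content": "You are a copy editor. Return exactly three bullet points, no preamble."},
        # Few-shot example: one input/output pair the model can mimic.
        {"role": "user", "content": "Summarize: Our Q3 revenue grew 12% while costs fell 3%."},
        {"role": "assistant", "content": "- Revenue up 12% in Q3\n- Costs down 3%\n- Margins improved"},
        # The real task, phrased the same way as the example.
        {"role": "user", "content": "Summarize: Churn dropped after we launched the loyalty program in May."},
    ],
    temperature=0.2,  # lower temperature = less run-to-run variation
)
print(response.choices[0].message.content)
```

The point isn't the specific wording; it's that structure (role, format, example) steers the completion far more reliably than a bare one-line question.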
Do we think Prompt Engineer will become a job title in 2024? It might, but, you know, aim higher, dude. In any case, you'll get much better responses if you know how your gen AI app works under the hood.