How I Won Singapore's GPT-4 Prompt Engineering Competition

Last month, I had the incredible honor of winning Singapore's first ever GPT-4 Prompt Engineering competition, organised by the Government Technology Agency of Singapore (GovTech), which brought together over 400 prompt-ly brilliant participants. Prompt engineering is a discipline that blends art and science: it is as much about technical understanding as it is about creativity and strategic thinking. This is a compilation of the prompt engineering strategies I learned along the way — strategies that can push any LLM to do exactly what you need, and more!

This article covers the following, ranging from beginner-friendly prompting techniques to advanced strategies:

Structuring prompts using the CO-STAR framework

Sectioning prompts using delimiters

Analyzing datasets using only LLMs, without plugins or code — with a hands-on example of analyzing a real-world Kaggle dataset using GPT-4
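As a quick preview of the first two items, a CO-STAR prompt (Context, Objective, Style, Tone, Audience, Response format) can be sectioned with delimiters so the LLM can tell each part apart. Below is a minimal sketch: the six section names follow the CO-STAR framework, while the helper function and example values are my own illustrations, not the competition prompts.

```python
# Minimal sketch: assembling a CO-STAR prompt with '#' delimiters.
# The six section names come from the CO-STAR framework; the helper
# function and the example content are illustrative only.

def build_costar_prompt(context, objective, style, tone, audience, response):
    sections = {
        "CONTEXT": context,
        "OBJECTIVE": objective,
        "STYLE": style,
        "TONE": tone,
        "AUDIENCE": audience,
        "RESPONSE": response,
    }
    # Delimit each section so instructions and content stay clearly separated.
    return "\n\n".join(f"# {name} #\n{text}" for name, text in sections.items())

prompt = build_costar_prompt(
    context="I run a social media account promoting a new product launch.",
    objective="Write a short post announcing the product.",
    style="Concise marketing copy.",
    tone="Enthusiastic but professional.",
    audience="Young professionals browsing on mobile.",
    response="A single paragraph under 50 words.",
)
print(prompt)
```

The resulting string starts with a `# CONTEXT #` section and ends with `# RESPONSE #`; any unambiguous delimiter (e.g. `###` or XML-style tags) works, as long as it is used consistently.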

In writing this, I sought to steer away from the traditional prompt engineering techniques that have already been extensively discussed and documented online. Instead, my aim is to bring fresh insights that I learned through experimentation, and a different, personal take on understanding and approaching certain techniques. I hope you'll enjoy reading this piece!

Author's Note:

Within the larger context of promptification, a new field is emerging: prompt engineering. Prompt engineering focuses on developing high-quality prompts that are specifically designed to work with AI models and help them generate output more effectively. The goal is to create prompts that are maximally informative and helpful to the model while minimizing noise and redundancy; done well, this improves model performance across a variety of tasks, from language modeling to image generation. A key challenge is creating prompts that are both diverse and informative. This matters especially for language models, which must be able to generate a wide variety of outputs from a given prompt while still receiving enough information to be guided in the right direction.

To meet this challenge, prompt engineers use a variety of techniques, including semantic clustering, pattern recognition, and machine learning. By analyzing large datasets of prompts and their corresponding outputs, they can identify patterns and relationships that guide the development of new prompts. The field is still in its early stages, but it has the potential to make a significant impact on AI and on how we prompt it. High-quality prompts designed specifically for AI models can unlock the full potential of these powerful tools and support a more productive, more creative future. As prompt engineering continues to evolve, we can expect new techniques and tools that help us write better prompts and achieve even better results.
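To make the semantic-clustering idea above concrete, here is a minimal sketch that groups similar prompt phrasings by clustering their TF-IDF vectors. It assumes scikit-learn is installed; the sample prompts, the TF-IDF representation, and the choice of KMeans with two clusters are all illustrative, not a prescribed pipeline.

```python
# Sketch: grouping similar prompts by clustering their TF-IDF vectors.
# Assumes scikit-learn is available; prompts and cluster count are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

prompts = [
    "Summarize this article in three bullet points.",
    "Give me a three-bullet summary of the text below.",
    "Translate the following paragraph into French.",
    "Render this passage in French, please.",
]

# Vectorize each prompt, then partition the vectors into two clusters.
vectors = TfidfVectorizer().fit_transform(prompts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Prompts sharing a label should ideally be similar phrasings of one task.
print(labels)
```

With a real prompt corpus, inspecting the clusters (and outliers far from every centroid) is one cheap way to spot redundant phrasings and gaps in coverage.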