
Announcing our newest GenAI course: Developer's Guide to LLM Prompting

We're pleased to offer a new, free course on LLM prompting that covers everything from prompt anatomy to advanced techniques. We'd love it if you gave it a try.
Created on August 26 | Last edited on August 26
We're excited to announce our latest course, "Developer's Guide to LLM Prompting," designed to equip you with the knowledge and practical skills needed to implement and take advantage of LLMs in your applications.
REGISTER FOR FREE


Led by Anish Shah (ML Engineer at Weights & Biases) and Teodora Danilovic (Prompt Engineer at AutogenAI), this course takes you on a journey from basic prompting techniques to advanced strategies. You'll learn how to craft effective prompts, use system messages and user inputs, and supply your own context documents to the LLM. You'll also follow a real industry case study with AutogenAI, covering the prompting considerations that matter when building applications used by thousands of users around the world.
Whether you're new to LLMs or have already started implementing them in your code, this course will strengthen your LLM prompting foundations.


Prompting techniques covered

In our newest course, you'll start with the basics of prompt anatomy. You'll learn how to write a good system message, how it ties together with the user input, and how to get the desired output. Anish will not only teach you the theory but also walk through the code (which is available in the course repo).
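To make the anatomy concrete, here is a minimal sketch of how a system message and user input fit together. It uses the widely adopted OpenAI-style chat message format; the helper name and message contents are illustrative, not from the course repo.

```python
# Minimal sketch of prompt anatomy: the system message sets behavior and
# output format, while the user message carries the actual request.
# The role/content dict structure follows the common chat-completion format;
# exact fields vary by provider.

def build_prompt(system_message: str, user_input: str) -> list[dict]:
    """Assemble a chat prompt from its two basic parts."""
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_input},
    ]

messages = build_prompt(
    system_message=(
        "You are a concise technical assistant. "
        "Answer in at most two sentences, in plain text."
    ),
    user_input="Explain what a system message does in an LLM prompt.",
)
```

The resulting list is what you would pass as the `messages` argument to a chat-style LLM API; keeping prompt assembly in one function makes the system message easy to iterate on separately from user inputs.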
Once you're familiar with how to format your prompts, inputs, and outputs, you'll move on to basic prompting techniques like zero-shot, few-shot, and chain-of-thought prompting.
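As a rough illustration of the difference between these techniques, the sketch below builds zero-shot, few-shot, and chain-of-thought prompts for a toy sentiment task. The task, example reviews, and labels are made up for demonstration and are not taken from the course materials.

```python
# Illustrative prompt builders for a toy sentiment-classification task.

def zero_shot(text: str) -> str:
    # Zero-shot: state the task directly, with no worked examples.
    return (
        "Classify the sentiment of this review as positive or negative.\n"
        f"Review: {text}\nSentiment:"
    )

# Few-shot: prepend a handful of demonstrations so the model can
# infer the task format and label set from examples.
FEW_SHOT_EXAMPLES = [
    ("The battery lasts all day, love it.", "positive"),
    ("Stopped working after a week.", "negative"),
]

def few_shot(text: str) -> str:
    demos = "\n".join(
        f"Review: {review}\nSentiment: {label}"
        for review, label in FEW_SHOT_EXAMPLES
    )
    return (
        "Classify the sentiment of each review as positive or negative.\n"
        f"{demos}\nReview: {text}\nSentiment:"
    )

def chain_of_thought(question: str) -> str:
    # Chain-of-thought: nudge the model to reason before answering.
    return f"{question}\nLet's think step by step."
```

Few-shot examples cost extra tokens on every call, so in practice you trade prompt length against accuracy, which is exactly the kind of consideration the course digs into.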

These are the most popular prompting techniques, and they are sufficient for most simple LLM-powered apps. If you're looking to solve a more complicated use case, we cover that too. Anish dives into advanced prompting techniques, explaining what they are and the common use cases where you might apply them. He covers:
  • Reasoning enhancement techniques
  • Structural prompting techniques
  • Optimization techniques
  • Interaction techniques
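To give a flavor of one reasoning-enhancement technique from this family, the sketch below implements self-consistency: sample several chain-of-thought completions at a non-zero temperature, then take a majority vote on the final answer. The `sample_completion` stub stands in for a real LLM call; it, the example answers, and all names here are assumptions for illustration, not the course's code.

```python
import random
from collections import Counter

def sample_completion(prompt: str, rng: random.Random) -> str:
    # Stub for an LLM call. A real implementation would sample a
    # chain-of-thought completion with temperature > 0 and extract
    # the final answer; here we fake a noisy answer distribution.
    return rng.choice(["42", "42", "41"])

def self_consistency(prompt: str, n_samples: int = 5, seed: int = 0) -> str:
    """Sample several answers and return the most common one."""
    rng = random.Random(seed)
    answers = [sample_completion(prompt, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```

The trade-off is cost: self-consistency multiplies the number of model calls by `n_samples`, so it is typically reserved for reasoning-heavy tasks where single-sample accuracy is poor.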

Case study

Each LLM-powered product is unique, so the prompting approach and results will differ. As you might expect, an application solving a given problem for ten users in the UK will need a very different prompt than one serving ten thousand users across Asia. Scaling an LLM solution introduces many new considerations, and in the course Teodora Danilovic, Prompt Engineer at AutogenAI, will share her experience of how a prompt engineer's job changes when working on multi-national LLM-powered solutions.

In conclusion, this course will equip you with the knowledge and skills to get started with LLMs, along with the mindset and considerations needed to turn them into a successful project.
ENROLL NOW