Image created by the author using Midjourney
I will not begin this with an introduction to prompt engineering, or a discussion of how prompt engineering is "AI's hottest new job" or whatever. You know what prompt engineering is, or you wouldn't be here. You know the talking points about its long-term feasibility and whether or not it's a legitimate job title. Or whatever.
Even knowing all that, you are here because prompt engineering interests you. Intrigues you. Maybe even fascinates you?
If you have already learned the basics of prompt engineering, and have had a look at course offerings to sharpen your skills, it's time to move on to some of the more recent prompt-related resources out there. So here you go: 3 recent prompt engineering resources to help you take your prompting game to the next level.
1. The Perfect Prompt: A Prompt Engineering Cheat Sheet
Are you looking for a one-stop shop for all your quick-reference prompt engineering needs? Look no further than The Prompt Engineering Cheat Sheet.
Whether you're a seasoned user or just beginning your AI journey, this cheat sheet should serve as a pocket dictionary for many areas of communication with large language models.
This is a very lengthy and very detailed resource, and I tip my hat to Maximilian Vogel and The Generator for putting it together and making it available. From basic prompting to RAG and beyond, this cheat sheet covers an awful lot of ground and leaves very little to the novice prompt engineer's imagination.
Topics you'll read about include:
- The AUTOMAT and the CO-STAR prompting frameworks
- Output format definition
- Few-shot learning
- Chain-of-thought prompting
- Prompt templates
- Retrieval Augmented Generation (RAG)
- Formatting and delimiters
- The multi-prompt approach
Example of the AUTOMAT prompting framework (source)
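To make the framework concrete, here is a minimal sketch in Python of how an AUTOMAT-style prompt might be assembled, with one block per component of the acronym (Act as a ..., User persona & audience, Targeted action, Output definition, Mode / tonality / style, Atypical cases, Topic whitelisting). The component wording below is my own illustration, not an example taken from the cheat sheet itself.

# Hypothetical AUTOMAT-style prompt: one entry per framework component.
automat_components = {
    "Act as a ...": "Act as a customer support chatbot for an online bookstore.",
    "User persona & audience": "You are talking to customers who want to track or change an order.",
    "Targeted action": "Answer the customer's question about their order.",
    "Output definition": "Reply in at most three short sentences, then ask if anything else is needed.",
    "Mode / tonality / style": "Be friendly, concise, and professional.",
    "Atypical cases": "If the request is unrelated to orders, politely redirect the customer.",
    "Topic whitelisting": "Only discuss topics related to the bookstore and its orders.",
}

# Join the components, in order, into a single prompt string.
prompt = "\n".join(automat_components.values())
print(prompt)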
Here's a direct link to the PDF version.
2. Gemini for Google Workspace Prompt Guide
The Gemini for Google Workspace Prompt Guide, "a quick-start handbook for effective prompts," came out of Google Cloud Next in early April.
This guide explores different ways to quickly jump in and gain mastery of the basics to help you accomplish your day-to-day tasks. It presents foundational skills for writing effective prompts, organized by role and use case. While the possibilities are virtually limitless, there are consistent best practices that you can put to use today. Dive in!
Google wants you to "work smarter, not harder," and Gemini is a big part of that plan. While designed specifically with Gemini in mind, much of the content is more generally applicable, so don't shy away if you aren't deep into the Google Workspace world. The guide is doubly apt if you do happen to be a Google Workspace enthusiast, so definitely add it to your list if that's the case.
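For a flavor of what's inside: the guide's central advice, as I read it, is to build prompts from four components: a persona, a task, context, and a format. A quick illustration of that pattern (my own wording, not an example lifted from the guide):

"You are a project manager at a small marketing agency (persona). Draft an email summarizing this week's status meeting for the team (task), based on the meeting notes pasted below (context). Keep it under 150 words and list action items as bullet points (format)."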
Check it out for yourself here.
3. LLMLingua: LLM Prompt Compression Tool
And now for something a bit different.
A recent paper from Microsoft (well, fairly recent) titled "LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression" introduced an approach to prompt compression intended to reduce cost and latency while maintaining response quality.
Prompt compression example with LLMLingua-2 (source)
You can check out the resulting Python library to try the compression scheme for yourself.
LLMLingua uses a compact, well-trained language model (e.g., GPT-2 small, LLaMA-7B) to identify and remove non-essential tokens in prompts. This approach enables efficient inference with large language models (LLMs), achieving up to 20x compression with minimal performance loss.
Below is an example of using LLMLingua for simple prompt compression (from the GitHub repository).
from llmlingua import PromptCompressor

# `prompt` is assumed to be your original, lengthy prompt string.
llm_lingua = PromptCompressor()
compressed_prompt = llm_lingua.compress_prompt(prompt, instruction="", question="", target_token=200)

# > {'compressed_prompt': 'Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He reanged five of boxes into packages of sixlters each and sold them $3 per. He sold the rest theters separately at the of three pens $2. How much did make in total, dollars?\nLets think step step\nSam bought 1 boxes x00 oflters.\nHe bought 12 * 300ters in total\nSam then took 5 boxes 6ters0ters.\nHe sold these boxes for 5 *5\nAfterelling these boxes there were 3030 highlighters remaining.\nThese form 330 / 3 = 110 groups of three pens.\nHe sold each of these groups for $2 each, so made 110 * 2 = $220 from them.\nIn total, then, he earned $220 + $15 = $235.\nSince his original cost was $120, he earned $235 - $120 = $115 in profit.\nThe answer is 115',
# 'origin_tokens': 2365,
# 'compressed_tokens': 211,
# 'ratio': '11.2x',
# 'saving': ', Saving $0.1 in GPT-4.'}

## Or use the phi-2 model,
llm_lingua = PromptCompressor("microsoft/phi-2")

## Or use a quantized model, like TheBloke/Llama-2-7b-Chat-GPTQ, only need pip install optimum auto-gptq
llm_lingua = PromptCompressor("TheBloke/Llama-2-7b-Chat-GPTQ", model_config={"revision": "main"})
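From there, the compressed text can be handed to whatever model you were going to call anyway. Below is a minimal sketch of that handoff, assuming the OpenAI Python client (v1+) as the downstream LLM; the client choice and model name are illustrative, not part of LLMLingua.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Send the compressed prompt instead of the original, cutting token count and cost.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": compressed_prompt["compressed_prompt"]}],
)
print(response.choices[0].message.content)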
There are now so many useful prompt engineering resources widely available. This is but a small taste of what's out there, just waiting to be explored. In bringing you this small sample, I hope that you've found at least one of these resources useful.
Happy prompting!
Matthew Mayo (@mattmayo13) holds a Master's degree in computer science and a graduate diploma in data mining. As Managing Editor, Matthew aims to make complex data science concepts accessible. His professional interests include natural language processing, machine learning algorithms, and exploring emerging AI. He is driven by a mission to democratize knowledge in the data science community. Matthew has been coding since he was 6 years old.