How I Built a Language Translator Using LangChain and Few-Shot Learning

Ravjot Singh
3 min read · Aug 18, 2024


In this project, I explored how to use LangChain’s tools with Google Generative AI to build dynamic text-generation systems. I cover everything from creating prompt templates to building few-shot examples to constructing a language translator. Let’s go step-by-step and dive deeper into how these components work together.

Setting Up the Project Environment

First, I integrated Google Generative AI using the langchain_google_genai library, which allowed me to access the model and generate custom outputs.

from langchain_google_genai import GoogleGenerativeAI

api_key = "your api key"  # Replace with your actual API key
llm = GoogleGenerativeAI(model="models/text-bison-001", google_api_key=api_key)

Designing Custom Prompt Templates

Prompt templates provide structure and consistency in the outputs generated by the model. For example, I created a prompt template that instructs the model to explain a topic in simple terms as if it were a teacher.

from langchain_core.prompts import PromptTemplate

template = '''I want you to act as a classroom teacher for students. In an easy way, explain the basics of {topic}.'''

prompt = PromptTemplate(
    input_variables=['topic'],
    template=template
)

After formatting the prompt with the topic "machine learning":

print(prompt.format(topic='machine learning'))

The output is:

I want you to act as a classroom teacher for students. In an easy way, explain the basics of machine learning.

This format is especially useful for educational content where the goal is to simplify complex concepts.
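Under the hood, formatting a prompt template is essentially named-placeholder substitution. The following is an illustrative sketch in plain Python (not LangChain's actual implementation) of what `prompt.format(topic=...)` produces; `format_prompt` is a hypothetical helper introduced here for demonstration:

```python
# Minimal stand-in for PromptTemplate.format: fill each {name}
# placeholder in the template with the matching keyword argument.
template = ("I want you to act as a classroom teacher for students. "
            "In an easy way, explain the basics of {topic}.")

def format_prompt(template: str, **variables: str) -> str:
    """Substitute keyword arguments into the template's placeholders."""
    return template.format(**variables)

print(format_prompt(template, topic="machine learning"))
# I want you to act as a classroom teacher for students. In an easy way, explain the basics of machine learning.
```

This is why the same template can be reused across any topic: only the variable slots change, while the instruction framing stays fixed.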

Building a Few-Shot Learning Example for Antonyms

Few-shot learning helps the model generate better outputs by providing a few example cases. I used this technique to create a system that identifies antonyms.

First, I created example pairs of words and their antonyms:

examples = [
    {"word": 'tall', "antonym": 'short'},
    {"word": 'big', "antonym": 'small'}
]

I formatted these examples using a template:

example_formatter_template = "word: {word}\nantonym: {antonym}"
example_prompt = PromptTemplate(
    input_variables=['word', 'antonym'],
    template=example_formatter_template
)

Next, I combined these examples into a FewShotPromptTemplate, which includes a prefix and suffix for additional context:

from langchain_core.prompts import FewShotPromptTemplate

few_shot = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix='Give me the antonym of the given word',
    suffix="Word: {input} Antonym:",
    input_variables=['input']
)

For instance, when I input "happy":

print(few_shot.format(input='happy'))

The output is:

Give me the antonym of the given word

word: tall
antonym: short

word: big
antonym: small

Word: happy Antonym:

This structure ensures that the model understands the context and can provide the correct antonym for new words.
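Conceptually, a `FewShotPromptTemplate` just joins the prefix, each formatted example, and the suffix with a separator (LangChain's default `example_separator` is `"\n\n"`). Here is an illustrative plain-Python sketch of that assembly; `build_few_shot_prompt` is a hypothetical helper, not LangChain's internal code:

```python
# Sketch of how a few-shot prompt is assembled:
# prefix + formatted examples + formatted suffix, joined by "\n\n".
examples = [
    {"word": "tall", "antonym": "short"},
    {"word": "big", "antonym": "small"},
]
example_template = "word: {word}\nantonym: {antonym}"

def build_few_shot_prompt(prefix, examples, example_template, suffix, **inputs):
    parts = [prefix]
    parts += [example_template.format(**ex) for ex in examples]
    parts.append(suffix.format(**inputs))
    return "\n\n".join(parts)

prompt_text = build_few_shot_prompt(
    "Give me the antonym of the given word",
    examples,
    example_template,
    "Word: {input} Antonym:",
    input="happy",
)
print(prompt_text)
```

Seeing the assembly laid out this way makes it clear why the model's completion naturally lands after the trailing "Antonym:" slot.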

Creating a Language Translator

In this third example, I built a simple language translator using LangChain. The translator converts English words into French by leveraging prompt engineering and few-shot learning.

Setting Up the Language Translator Example

First, I created examples that map English words to their French translations:

examples = [
    {"english": "hello", "french": "bonjour"},
    {"english": "goodbye", "french": "au revoir"},
]

These examples help the model understand the pattern for translation. I formatted the examples using another prompt template:

example_template = "English: {english}\nFrench: {french}"
example_prompt = PromptTemplate(
    input_variables=['english', 'french'],
    template=example_template
)

Creating the Translation Chain

Using a FewShotPromptTemplate, I structured the translation prompt:

translator_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Translate the following English word to French:",
    suffix="English: {input}\nFrench:",
    input_variables=['input']
)

For example, if the input is "thank you," the formatted prompt looks like this:

print(translator_prompt.format(input='thank you'))

Output:

Translate the following English word to French:

English: hello
French: bonjour

English: goodbye
French: au revoir

English: thank you
French:

Building the Translation Chain

Finally, I connected the prompt template to the language model using LLMChain, allowing the model to perform the translation:

from langchain.chains import LLMChain

chain = LLMChain(llm=llm, prompt=translator_prompt)
translation = chain({'input': 'thank you'})
print(translation)

Output:

{'input': 'thank you', 'text': 'merci'}

The model correctly translates "thank you" to "merci," demonstrating the effectiveness of few-shot learning combined with prompt templates.
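Since running the chain requires a live API key, here is a hedged sketch of its control flow with a stand-in model: `fake_llm` and `run_chain` are hypothetical stubs introduced for illustration, mimicking how an `LLMChain` formats the prompt, calls the model, and returns the inputs alongside the generation under a `"text"` key:

```python
# Stand-in model: pretends to complete the final "French:" line.
def fake_llm(prompt: str) -> str:
    return "merci"

# Sketch of the chain's flow: format the prompt, call the model,
# return the inputs merged with the generation under "text".
def run_chain(prompt_template: str, llm, **inputs) -> dict:
    prompt = prompt_template.format(**inputs)
    return {**inputs, "text": llm(prompt)}

translator_template = (
    "Translate the following English word to French:\n\n"
    "English: hello\nFrench: bonjour\n\n"
    "English: goodbye\nFrench: au revoir\n\n"
    "English: {input}\nFrench:"
)
print(run_chain(translator_template, fake_llm, input="thank you"))
# {'input': 'thank you', 'text': 'merci'}
```

Swapping `fake_llm` for the real `llm` object is, in essence, all the actual chain does differently.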

Conclusion

This project demonstrates the flexibility and power of combining prompt engineering, few-shot learning, and LangChain for various use cases—from simple explanations and antonym generation to language translation. Whether you're building educational content or creating multilingual systems, these techniques can help you guide models towards more consistent and relevant outputs.

This brings us to the end of this article. I hope you found this guide useful and that it helps you on your journey to mastering prompt engineering and LangChain. Remember, practice is key to truly understanding these concepts and applying them effectively.

If you’re interested in more resources related to Data Science, Machine Learning, and Generative AI, feel free to explore my GitHub account.

Let’s connect on LinkedIn — Ravjot Singh.
