Integrate Gemini AI into NextJS
I’ve been experimenting with Gemini in my Next.js project ScaleNext and found two ways to integrate it effectively. In this post, I’ll walk you through both methods, explain the pros and cons of each, and help you decide which might be better suited for your needs.
Method 1: Using @google/generative-ai
The first method uses the official @google/generative-ai package. This approach is relatively straightforward because there is already a wealth of examples and documentation available online. If you're just getting started with Gemini, this could be a good entry point.
Pros:
Abundant Resources: Plenty of tutorials and examples are available, making it easy to set up.
Official Integration: You’re using a package developed by Google, ensuring you're working with well-maintained and reliable code.
Cons:
Verbose Setup: You'll need to manually handle tokens and manage the message flow between the app and Gemini, which adds complexity (see the sketch after this list).
Limited Knowledge Integration: One downside is the uncertainty around feeding the AI custom knowledge, such as specific PDFs or other file types. Handling file inputs or domain-specific data may require additional customization or workarounds.
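As an example of that verbosity, multi-turn conversations mean you build, store, and replay the chat history yourself on every request. Here is a rough sketch of what that looks like with this package (the history contents are made up for illustration):
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-pro' });

async function askWithHistory(question: string) {
  // Every previous turn has to be replayed as { role, parts } objects that you persist yourself.
  const chat = model.startChat({
    history: [
      { role: 'user', parts: [{ text: 'Hi, I am wiring Gemini into my Next.js app.' }] },
      { role: 'model', parts: [{ text: 'Great, how can I help?' }] },
    ],
  });
  const result = await chat.sendMessage(question);
  return result.response.text();
}
With that in mind, here are the API route and the client page for this method.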
/**
API route (app/api/gemini/route.ts)
**/
// Import `GoogleGenerativeAI` from the package we installed earlier.
import { GoogleGenerativeAI } from '@google/generative-ai';
import { NextResponse } from 'next/server';
// Export an async POST handler that receives the request
// and returns the generated text as the response.
export async function POST(req) {
try {
// Access your API key by creating an instance of GoogleGenerativeAI
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
// Initialize a generative model
const model = genAI.getGenerativeModel({ model: 'gemini-pro' });
// Retrieve the data we receive as part of the request body
const data = await req.json();
// Define a prompt variable
const prompt = data.body;
// Pass the prompt to the model and retrieve the output
const result = await model.generateContent(prompt);
const response = result.response;
const output = response.text();
// Send the LLM output as a server response object
return NextResponse.json({ output: output });
} catch (error) {
console.error(error);
// Return an error response with status 500 and an error message
return NextResponse.json({ error: 'An error occurred while generating content' }, { status: 500 });
}
}
/**
src/app/(saas)/gemini/page.tsx
**/
'use client'
import { useState } from 'react'
import { Button } from '@/components/ui/button'
import { Card, CardContent, CardDescription, CardFooter, CardHeader, CardTitle } from '@/components/ui/card'
import { Input } from '@/components/ui/input'
export default function Component() {
const [input, setInput] = useState('')
const [response, setResponse] = useState('')
const [isLoading, setIsLoading] = useState(false)
const [error, setError] = useState<string | null>(null)
const generateResponse = async () => {
setIsLoading(true)
setError(null)
try {
const res = await fetch('/api/gemini', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ body: input }),
})
if (!res.ok) {
throw new Error('Failed to fetch response')
}
const data = await res.json()
setResponse(data.output)
} catch (err) {
console.error('Error:', err)
setError('An error occurred while generating the response. Please try again.')
} finally {
setIsLoading(false)
}
}
return (
<Card className="w-full max-w-3xl">
<CardHeader>
<CardTitle>Gemini Q&A</CardTitle>
<CardDescription>Ask a question and get an answer from Gemini</CardDescription>
</CardHeader>
<CardContent>
<div className="flex space-x-2">
<Input
placeholder="Enter your question"
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => {
if (e.key === 'Enter' && !isLoading && input.trim()) {
generateResponse()
}
}}
/>
<Button onClick={generateResponse} disabled={isLoading || !input.trim()}>
{isLoading ? 'Asking...' : 'Ask'}
</Button>
</div>
</CardContent>
<CardFooter>
<div className="w-full">
<h3 className="text-lg font-semibold mb-2">Response:</h3>
{isLoading ? (
<div className="bg-muted p-4 rounded-md">Loading...</div>
) : error ? (
<div className="bg-red-100 text-red-800 p-4 rounded-md">{error}</div>
) : response ? (
<div className="bg-muted p-4 rounded-md whitespace-pre-wrap">
{response}
</div>
) : (
<div className="bg-muted p-4 rounded-md">
Your answer will appear here
</div>
)}
</div>
</CardFooter>
</Card>
)
}
Method 2: Using the Vercel AI SDK
The second method leverages the Vercel AI SDK (the ai package together with the @ai-sdk/google provider) to integrate Gemini. This approach is straightforward and comes with several powerful features right out of the box.
Key Features:
Customizable Provider Instances: You can easily configure the Gemini integration to suit your needs, whether you're working with standard APIs or need more advanced setups (sketched just after this list).
Text Generation Models: It supports advanced models, such as gemini-1.5-pro-latest, for generating high-quality text.
File Inputs: Unlike the @google/generative-ai method, this SDK makes it simpler to incorporate file inputs like PDFs, enabling you to provide custom knowledge to the AI (sketched below).
Content Caching: Built-in options for caching allow for faster responses and more efficient performance in production environments.
Model Fine-Tuning: You can fine-tune models to better fit your specific use cases, making the integration more powerful for domain-specific tasks.
Embeddings and Structured Outputs: The SDK also supports embedding models and generating structured outputs, giving you more control over how the AI responds to different types of inputs (sketched below).
Safety Settings: You can configure safety parameters to control the behavior and outputs of the models, ensuring they align with your application’s needs.
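For instance, a customized provider instance with safety settings looks roughly like this. It's a hedged sketch based on the @ai-sdk/google provider options; the commented-out baseURL is a placeholder, not something this setup requires:
import { createGoogleGenerativeAI } from '@ai-sdk/google';

// Provider-level settings: apiKey, baseURL and headers are optional overrides.
const google = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_API_KEY,
  // baseURL: 'https://my-gateway.example.com/v1beta', // placeholder proxy/gateway URL
});

// Model-level settings, including safety thresholds.
const googleAI = google('gemini-1.5-pro-latest', {
  safetySettings: [
    { category: 'HARM_CATEGORY_DANGEROUS_CONTENT', threshold: 'BLOCK_MEDIUM_AND_ABOVE' },
    { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_ONLY_HIGH' },
  ],
});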
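Feeding the model a PDF is then just a file part in the message content. A rough sketch, where the file path and prompt are made up (googleAI is the model instance created above):
import { generateText } from 'ai';
import { readFileSync } from 'node:fs';

async function summarizePdf() {
  const { text } = await generateText({
    model: googleAI,
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Summarize the key points of this document.' },
          // Hypothetical path: swap in whatever custom knowledge you want the model to read.
          { type: 'file', data: readFileSync('./docs/handbook.pdf'), mimeType: 'application/pdf' },
        ],
      },
    ],
  });
  return text;
}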
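Embeddings and structured outputs are similarly small once the provider exists. A sketch, assuming you add zod for the schema (text-embedding-004 is one of the available embedding models):
import { embed, generateObject } from 'ai';
import { z } from 'zod';

// Embeddings: turn text into a vector using the provider instance from above.
async function embedQuery(value: string) {
  const { embedding } = await embed({
    model: google.textEmbeddingModel('text-embedding-004'),
    value,
  });
  return embedding;
}

// Structured output: constrain the response to a schema instead of free-form text.
async function suggestMetadata(topic: string) {
  const { object } = await generateObject({
    model: googleAI,
    schema: z.object({
      title: z.string(),
      tags: z.array(z.string()),
    }),
    prompt: `Suggest a title and tags for a blog post about ${topic}.`,
  });
  return object;
}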
Pros:
Flexible and Feature-Rich: The Vercel AI SDK provides more flexibility and advanced features, particularly around file handling, fine-tuning, and model customization.
Better for Complex Applications: If you're building a project that requires more control over AI inputs and outputs, this method is a better fit.
Cons:
Less Documentation: Compared to the @google/generative-ai package, the AI SDK's Gemini provider has fewer examples available, which can make the initial setup more challenging for beginners.
More Complexity: With advanced features comes more configuration. This method might be overkill if you're just looking for a simple integration.
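Below are the API route (app/api/gemini/route.ts under the App Router), the two helper modules it imports, and the chat page.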
import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { NextRequest, NextResponse } from 'next/server';
import { generateStreamResponse } from '@/app/api/gemini/generate-stream-response'
import { generateTextResponse } from '@/app/api/gemini/generate-text-response'
const google = createGoogleGenerativeAI({
apiKey: process.env.GEMINI_API_KEY
});
const googleAI = google('gemini-1.5-pro-latest');
export async function POST(req: NextRequest) {
try {
const { messages } = await req.json();
const lastMessage = messages[messages.length - 1];
const prompt = lastMessage.content;
// Alternative: use generateTextResponse and change the UI to do a plain fetch (see the sketch after the helpers below).
// const response = await generateTextResponse(googleAI, prompt)
// return NextResponse.json(response, { status: 200 })
// Use streamText to generate and stream the response
return generateStreamResponse(googleAI, messages)
} catch (error) {
console.error('Error generating content:', error);
return new NextResponse('An error occurred while generating content', { status: 500 });
}
}
//generate-stream-response.ts
import { streamText } from 'ai';
/**
* Generates a stream response using the specified AI model and messages.
*
* @param {any} googleAI - The AI model to use for generating the stream response.
* @param {any[]} messages - An array of message objects to provide to the AI model.
* @returns {Promise<any>} - A promise that resolves to the stream response.
*/
export async function generateStreamResponse(googleAI: any, messages: any[]): Promise<any> {
const stream = await streamText({
model: googleAI,
messages: messages.map((message: any) => ({
role: message.role,
content: message.content,
})),
});
// Return the result as a data stream Response that useChat can consume
return stream.toDataStreamResponse();
}
//generate-text-response.ts
import { generateText } from 'ai';
/**
* Generates text using the specified AI model and returns the raw generated text.
* If you use this helper, adjust the page to do a plain fetch instead of a chat stream.
*
* @param {any} googleAI - The AI model to use for generating text.
* @param {string} prompt - The prompt to provide to the AI model.
* @returns {Promise<any>} - A promise that resolves to the response.
*/
export async function generateTextResponse(googleAI: any, prompt: string): Promise<any> {
const { text } = await generateText({
model: googleAI,
prompt
});
return text;
}
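If you go with the non-streaming generateTextResponse branch, the client just needs a plain fetch instead of useChat. A quick sketch of what that call could look like, given that the route would return the generated text as JSON:
// Sketch for the non-streaming branch (route returns NextResponse.json(text)).
async function askGemini(question: string): Promise<string> {
  const res = await fetch('/api/gemini', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // The route reads the last message's content as the prompt.
    body: JSON.stringify({ messages: [{ role: 'user', content: question }] }),
  });
  if (!res.ok) throw new Error('Failed to fetch response');
  return res.json(); // the JSON body is just the generated string
}
For the streaming path, the chat page below uses the useChat hook from ai/react, which manages the message state and streaming for you.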
'use client'
import { useChat, Message } from 'ai/react'
import { Button } from '@/components/ui/button'
import { Card, CardContent, CardDescription, CardFooter, CardHeader, CardTitle } from '@/components/ui/card'
import { Input } from '@/components/ui/input'
export default function GeminiChat() {
const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
api: '/api/gemini',
})
return (
<Card className="w-full max-w-3xl">
<CardHeader>
<CardTitle>Gemini Q&A</CardTitle>
<CardDescription>Ask a question and get an answer from Gemini</CardDescription>
</CardHeader>
<CardContent>
<form onSubmit={handleSubmit} className="flex space-x-2">
<Input
placeholder="Enter your question"
value={input}
onChange={handleInputChange}
/>
<Button type="submit" disabled={isLoading || !input.trim()}>
{isLoading ? 'Asking...' : 'Ask'}
</Button>
</form>
</CardContent>
<CardFooter>
<div className="w-full">
<h3 className="text-lg font-semibold mb-2">Conversation:</h3>
<div className="bg-muted p-4 rounded-md whitespace-pre-wrap space-y-4">
{messages.map((message: Message) => (
<div key={message.id} className={`${message.role === 'user' ? 'text-blue-600' : 'text-green-600'}`}>
<strong>{message.role === 'user' ? 'You: ' : 'Gemini: '}</strong>
{message.content}
</div>
))}
</div>
{messages.length === 0 && (
<div className="bg-muted p-4 rounded-md">
Your conversation will appear here
</div>
)}
</div>
</CardFooter>
</Card>
)
}
Which Method is Best for You?
If you want a quick and simple integration with plenty of online resources, Method 1 (using @google/generative-ai) is probably the best choice.
If you need more advanced features, such as fine-tuning, file inputs, and caching, and you're comfortable with a bit more setup, Method 2 (using the Vercel AI SDK) offers far more flexibility.
You can find the working implementation at https://github.com/PedroPini/ScaleNext