Build an AI Chatbot with Gemini, Django & React: Complete 2025 Tutorial
Imagine having a conversational AI assistant that you built from the ground up, running on your own server and tailored to your needs. In this 2025 tutorial, we’re going to turn that imagination into reality. We’ll construct a sleek, full-stack AI chatbot application that integrates Google’s cutting-edge Gemini AI model, powers it with a robust Django backend, and presents it through a dynamic, modern React frontend.
This isn’t just about calling an API; it’s about architecting a complete application. You’ll learn how to structure a Django REST API, securely manage API keys, handle asynchronous AI requests, and build a reactive frontend component. By the end, you’ll have a portfolio-worthy project that demonstrates proficiency in three of the most in-demand technologies in web development.
What You’ll Build: A single-page application with a real-time chat interface where users can type questions and receive intelligent, streaming responses from Google’s Gemini 2.5 Flash model, with the full conversation history maintained in the browser.
Why This Stack? Gemini, Django, and React
Google Gemini is a formidable family of AI models, excelling in reasoning, code generation, and multimodal understanding. For our chatbot, we’ll use the gemini-2.5-flash model, which offers an excellent balance of intelligence, speed, and affordability via Google AI Studio.
Django serves as our secure and scalable backend fortress. Its role is crucial: it protects our sensitive Gemini API key (which should never be exposed in frontend code), structures the request/response logic, and can easily be extended with databases, user authentication, and more complex business logic.
React provides the instantaneous, component-based user interface. It allows our chat interface to feel alive: messages appear as they are generated, the input field clears seamlessly, and the chat history updates without a page refresh. It’s the perfect tool for crafting a modern single-page application (SPA) experience.
To watch the full tutorial on YouTube, click here.
Prerequisites and Project Setup
Before we write our first line of code, let’s ensure our development environment is ready. You’ll need the following installed:
- Python 3.9+ and pip
- Node.js 18+ and npm
- A Google AI Studio API Key (free tier available)
- A code editor like VS Code
1. Get Your Gemini API Key
Head over to Google AI Studio. Sign in with your Google account, click “Create API Key,” and generate a new key. Copy it and keep it safe; we’ll use it shortly.
Security First: Treat this key like a password. We will never hardcode it in our frontend React app. It will be stored securely in our Django backend’s environment variables.
2. Create Project Structure
Open your terminal and create a base project directory. We’ll have two main sub-projects: backend/ (Django) and frontend/ (React).
mkdir ai-chatbot-project
cd ai-chatbot-project
mkdir backend frontend
Part 1: Building the Django Backend API
Our backend has one primary job: receive a user’s message from the React frontend, send it securely to the Gemini API, and stream the AI’s response back.
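Before writing any Django code, it helps to pin down that contract: the frontend POSTs JSON like `{"message": "..."}`, and the backend answers with a Server-Sent Events stream. Here is a minimal sketch of the validation half of that contract in plain Python (no Django required; the function name is ours, not part of the tutorial code):

```python
import json

def validate_chat_payload(raw_body: bytes):
    """Parse a request body and return (message, error).

    Mirrors the contract used later in the tutorial: a JSON object
    with a non-empty 'message' field.
    """
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return None, "Body must be valid JSON."
    message = str(payload.get("message", "")).strip()
    if not message:
        return None, "Message field is required."
    return message, None

# A valid payload passes; an empty one is rejected.
msg, err = validate_chat_payload(b'{"message": "Hello"}')
assert msg == "Hello" and err is None
```

Django REST Framework will do this parsing for us via `request.data`, but keeping the contract explicit makes the frontend and backend easier to develop independently.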
1. Set Up the Django Project and Environment
Navigate to the backend directory and set up a Python virtual environment.
cd backend
python -m venv venv
# On macOS/Linux:
source venv/bin/activate
# On Windows:
# venv\Scripts\activate
Now, install the required Python packages. We’ll need Django, the Django REST Framework for our API, the Google Generative AI library, and CORS headers to allow our React app to communicate with the Django server.
pip install django djangorestframework google-generativeai django-cors-headers
Next, create a new Django project and a dedicated app for our chat API.
django-admin startproject core .
python manage.py startapp chatapi
2. Configure Django Settings
Open backend/core/settings.py. We need to register our new app and configure security settings for development.
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # Third-party apps
    'rest_framework',
    'corsheaders',
    # Local app
    'chatapi',
]
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'corsheaders.middleware.CorsMiddleware',  # Add this line - high priority
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
# For development, allow all origins. Restrict this in production!
CORS_ALLOW_ALL_ORIGINS = True
# Read your Gemini API key from an environment variable for security.
# The placeholder fallback is only for quick local experiments - never commit a real key.
import os
GEMINI_API_KEY = os.getenv('GEMINI_API_KEY', 'YOUR_API_KEY_HERE')
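One drawback of a placeholder fallback is that a missing key only surfaces later, as a confusing API error at request time. An alternative pattern is to fail fast at startup; this is a sketch, and `require_env` is our own helper, not part of Django:

```python
import os

def require_env(name: str) -> str:
    """Return the environment variable's value, or raise with a clear message."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"Missing required environment variable {name!r}. "
            f"Set it before starting the server, e.g. export {name}=..."
        )
    return value

# In settings.py you would then write:
# GEMINI_API_KEY = require_env('GEMINI_API_KEY')
```

With this, `python manage.py runserver` refuses to start until the key is set, instead of failing on the first chat request.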
Important: The corsheaders.middleware.CorsMiddleware must be placed as high as possible in the middleware list, ideally before CommonMiddleware. This ensures the CORS headers are added to responses, allowing your React dev server (usually on localhost:5173 with Vite) to talk to Django (localhost:8000).
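When you deploy, replace the allow-all setting with an explicit whitelist. A sketch of one common pattern (the production domain is a placeholder; `django-cors-headers` reads `CORS_ALLOWED_ORIGINS` when `CORS_ALLOW_ALL_ORIGINS` is off):

```python
def cors_settings(debug: bool) -> dict:
    """Return django-cors-headers settings for development vs. production."""
    if debug:
        # Development: any origin may call the API.
        return {"CORS_ALLOW_ALL_ORIGINS": True}
    # Production: only the deployed frontend (placeholder domain)
    # and the local Vite dev server are allowed.
    return {
        "CORS_ALLOW_ALL_ORIGINS": False,
        "CORS_ALLOWED_ORIGINS": [
            "https://your-frontend.example.com",
            "http://localhost:5173",
        ],
    }

assert cors_settings(True)["CORS_ALLOW_ALL_ORIGINS"] is True
```

In a real settings.py you would simply branch on `DEBUG` rather than wrap this in a function; the function form here just makes the two configurations easy to compare.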
3. Create the API View and URL
First, let’s define a simple view in chatapi/views.py that will handle the chat logic.
import json

import google.generativeai as genai
from django.conf import settings
from django.http import StreamingHttpResponse
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status

# Configure the Gemini API with our key
genai.configure(api_key=settings.GEMINI_API_KEY)


class ChatAPIView(APIView):
    """
    API endpoint to handle chat messages with the Gemini AI model.
    Accepts a POST request with a 'message' field.
    Returns a streaming HTTP response with the AI's generated text.
    """

    def post(self, request):
        user_message = request.data.get('message', '')
        if not user_message:
            return Response(
                {'error': 'Message field is required.'},
                status=status.HTTP_400_BAD_REQUEST
            )
        try:
            # Initialize the Gemini model
            model = genai.GenerativeModel('gemini-2.5-flash')
            # Generate a response with streaming enabled
            response = model.generate_content(user_message, stream=True)

            # Yield each chunk to the client as a Server-Sent Event
            def event_stream():
                for chunk in response:
                    if chunk.text:
                        yield f"data: {json.dumps({'text': chunk.text})}\n\n"

            # StreamingHttpResponse sends the SSE stream incrementally
            return StreamingHttpResponse(
                event_stream(),
                content_type='text/event-stream'
            )
        except Exception as e:
            return Response(
                {'error': str(e)},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )
Now, connect this view to a URL. Create chatapi/urls.py:
from django.urls import path
from .views import ChatAPIView
urlpatterns = [
    path('chat/', ChatAPIView.as_view(), name='chat_api'),
]
And include it in the project’s main core/urls.py:
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/', include('chatapi.urls')),
]
4. Test the Backend API
Run the Django development server to ensure our API is working. First, set your actual Gemini API key as an environment variable.
# In your terminal, set the key (macOS/Linux)
export GEMINI_API_KEY="your_actual_key_here"
# On Windows (Command Prompt):
# set GEMINI_API_KEY=your_actual_key_here
# On Windows (PowerShell):
# $env:GEMINI_API_KEY="your_actual_key_here"
python manage.py runserver
You can test the API using a tool like curl or Postman. Open a new terminal and run:
curl -X POST http://127.0.0.1:8000/api/chat/ \
-H "Content-Type: application/json" \
-d '{"message": "Explain quantum computing in one sentence."}' \
--no-buffer
You should see a stream of Server-Sent Events (SSE) containing the AI’s response. If so, your backend is ready!
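If you prefer testing from Python rather than curl, the same stream can be consumed with the `requests` library (`stream=True` plus `iter_lines`). The parsing function below is our own sketch; only the `data:` line format matches the backend:

```python
import json

def extract_sse_text(lines):
    """Yield the text payload of each 'data:' line from an SSE stream.

    `lines` can be any iterable of strings, e.g. the result of
    requests.post(..., stream=True).iter_lines(decode_unicode=True).
    """
    for line in lines:
        if line and line.startswith("data: "):
            yield json.loads(line[len("data: "):])["text"]

# With requests (assumed installed) you would use it like this:
# resp = requests.post("http://127.0.0.1:8000/api/chat/",
#                      json={"message": "Hi"}, stream=True)
# for text in extract_sse_text(resp.iter_lines(decode_unicode=True)):
#     print(text, end="", flush=True)

sample = ['data: {"text": "Hello"}', "", 'data: {"text": " world"}']
assert "".join(extract_sse_text(sample)) == "Hello world"
```

Because the parser takes any iterable of lines, you can unit-test it with canned data, as above, without a running server.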
Part 2: Crafting the React Frontend
Now, let’s build the user interface. Our React app will provide a chat window, manage the conversation state, and handle the communication with our Django API.
1. Initialize the React App
Navigate to the frontend directory and create a new React application using Vite (a faster, modern alternative to Create React App).
cd ../frontend
npm create vite@latest . -- --template react
npm install
We’ll also install Axios for general HTTP requests (the streaming chat call itself will use the browser’s built-in fetch API, which exposes the response as a readable stream), and we’ll style the app with a custom CSS file rather than a framework.
npm install axios
2. Build the Core Chat Components
We’ll create two main components: ChatInterface.jsx (the main container) and Message.jsx (to display individual messages). Let’s start with the main component. Replace the contents of src/App.jsx.
import { useState, useRef, useEffect } from 'react';
import './App.css';
function App() {
const [messages, setMessages] = useState([]);
const [inputText, setInputText] = useState('');
const [isLoading, setIsLoading] = useState(false);
const messagesEndRef = useRef(null);
// API endpoint - adjust if your Django server is on a different port
const API_URL = 'http://127.0.0.1:8000/api/chat/';
// Function to scroll to the bottom of the chat
const scrollToBottom = () => {
messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
};
useEffect(() => {
scrollToBottom();
}, [messages]);
const handleSendMessage = async () => {
const userMessage = inputText.trim();
if (!userMessage || isLoading) return;
// Add user message to the UI immediately
const newUserMessage = { sender: 'user', text: userMessage };
setMessages((prev) => [...prev, newUserMessage]);
setInputText('');
setIsLoading(true);
// Add a placeholder for the AI's streaming response
const aiMessageId = Date.now(); // Simple unique ID
const newAiMessage = { sender: 'ai', text: '', id: aiMessageId };
setMessages((prev) => [...prev, newAiMessage]);
try {
// Use the Fetch API: its ReadableStream lets us consume the SSE response incrementally
const response = await fetch(API_URL, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ message: userMessage }),
});
if (!response.body) {
throw new Error('ReadableStream not supported in this browser.');
}
const reader = response.body.getReader();
const decoder = new TextDecoder();
let aiText = '';
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
// Parse Server-Sent Events
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
try {
const data = JSON.parse(line.substring(6));
aiText += data.text;
// Update the specific AI message with the new text
setMessages((prev) =>
prev.map((msg) =>
msg.id === aiMessageId ? { ...msg, text: aiText } : msg
)
);
} catch (e) {
console.error('Error parsing SSE data:', e);
}
}
}
}
} catch (error) {
console.error('Error calling the API:', error);
// Update the AI message with an error
setMessages((prev) =>
prev.map((msg) =>
msg.id === aiMessageId
? { ...msg, text: 'Error: Could not get a response.' }
: msg
)
);
} finally {
setIsLoading(false);
}
};
const handleKeyPress = (e) => {
if (e.key === 'Enter' && !e.shiftKey) {
e.preventDefault();
handleSendMessage();
}
};
return (
<div className="app-container">
<header className="app-header">
<h1>🧠 Gemini AI Chatbot</h1>
<p>Powered by Django & React</p>
</header>
<div className="chat-container">
<div className="messages-window">
{messages.map((msg, index) => (
<div
key={index}
className={`message-bubble ${msg.sender}`}
>
<div className="sender-label">
{msg.sender === 'user' ? 'You' : 'Gemini AI'}
</div>
<div className="message-text">
{msg.text || (msg.sender === 'ai' && isLoading ? 'Thinking...' : '')}
</div>
</div>
))}
<div ref={messagesEndRef} />
</div>
<div className="input-area">
<textarea
value={inputText}
onChange={(e) => setInputText(e.target.value)}
onKeyDown={handleKeyPress}
placeholder="Ask me anything..."
disabled={isLoading}
rows="3"
/>
<button
onClick={handleSendMessage}
disabled={isLoading || !inputText.trim()}
>
{isLoading ? 'Sending...' : 'Send'}
</button>
</div>
</div>
<footer className="app-footer">
<p>
Built with <a href="https://halogeniusideas.com">Halogenius Ideas</a>. Responses are generated by Google's Gemini AI.
</p>
</footer>
</div>
);
}
export default App;
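One caveat about the read loop above: `reader.read()` returns network chunks, and a chunk can end in the middle of a `data:` line. When that happens, `JSON.parse` throws and that fragment is silently dropped by the `catch`. A more robust approach keeps a carry-over buffer between reads and only parses completed lines. The logic is sketched here in Python so it can be checked independently; a production React version would mirror it in JavaScript:

```python
import json

def sse_stream_parser():
    """Stateful SSE parser: feed() raw chunks, get back complete text payloads."""
    buffer = ""

    def feed(chunk: str):
        nonlocal buffer
        buffer += chunk
        texts = []
        # Only lines terminated by '\n' are complete; keep the tail for later.
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            if line.startswith("data: "):
                texts.append(json.loads(line[len("data: "):])["text"])
        return texts

    return feed

# An event split across two chunks is parsed only once the line completes.
feed = sse_stream_parser()
assert feed('data: {"te') == []
assert feed('xt": "Hi"}\n\n') == ["Hi"]
```

For short chat responses on localhost the naive version usually works, which is why the tutorial keeps it simple; the buffered version matters on slow or high-latency connections.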
3. Style the Application with CSS
Replace the contents of src/App.css with the following styles to create a clean, modern chat interface.
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
display: flex;
justify-content: center;
align-items: center;
padding: 20px;
}
.app-container {
width: 100%;
max-width: 900px;
background: rgba(255, 255, 255, 0.95);
border-radius: 24px;
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
overflow: hidden;
display: flex;
flex-direction: column;
height: 85vh;
}
.app-header {
background: linear-gradient(to right, #4f46e5, #7c3aed);
color: white;
padding: 2rem;
text-align: center;
}
.app-header h1 {
font-size: 2.5rem;
margin-bottom: 0.5rem;
font-weight: 700;
}
.app-header p {
font-size: 1rem;
opacity: 0.9;
}
.chat-container {
display: flex;
flex-direction: column;
flex: 1;
padding: 1.5rem;
}
.messages-window {
flex: 1;
overflow-y: auto;
padding: 1rem;
background: #f8fafc;
border-radius: 12px;
margin-bottom: 1.5rem;
border: 1px solid #e2e8f0;
display: flex;
flex-direction: column;
gap: 1rem;
}
.message-bubble {
max-width: 80%;
padding: 1rem 1.25rem;
border-radius: 18px;
line-height: 1.5;
word-wrap: break-word;
animation: fadeIn 0.3s ease;
}
@keyframes fadeIn {
from { opacity: 0; transform: translateY(10px); }
to { opacity: 1; transform: translateY(0); }
}
.message-bubble.user {
align-self: flex-end;
background: #4f46e5;
color: white;
border-bottom-right-radius: 4px;
}
.message-bubble.ai {
align-self: flex-start;
background: #f1f5f9;
color: #1e293b;
border-bottom-left-radius: 4px;
border: 1px solid #e2e8f0;
}
.sender-label {
font-size: 0.75rem;
font-weight: 600;
margin-bottom: 0.25rem;
opacity: 0.8;
text-transform: uppercase;
}
.message-text {
white-space: pre-wrap;
font-size: 1rem;
}
/* Blinking cursor effect, scoped so user bubbles don't get one */
.message-bubble.ai .message-text::after {
content: '';
display: inline-block;
width: 8px;
height: 16px;
background-color: currentColor;
margin-left: 2px;
animation: blink 1s infinite;
vertical-align: baseline;
}
@keyframes blink {
0%, 100% { opacity: 0; }
50% { opacity: 1; }
}
.input-area {
display: flex;
gap: 1rem;
align-items: flex-end;
}
.input-area textarea {
flex: 1;
padding: 1rem;
border: 2px solid #cbd5e1;
border-radius: 12px;
font-size: 1rem;
font-family: inherit;
resize: none;
transition: border-color 0.2s;
}
.input-area textarea:focus {
outline: none;
border-color: #4f46e5;
}
.input-area button {
padding: 1rem 2rem;
background: linear-gradient(to right, #10b981, #34d399);
color: white;
border: none;
border-radius: 12px;
font-size: 1rem;
font-weight: 600;
cursor: pointer;
transition: transform 0.2s, box-shadow 0.2s;
}
.input-area button:hover:not(:disabled) {
transform: translateY(-2px);
box-shadow: 0 5px 15px rgba(16, 185, 129, 0.4);
}
.input-area button:disabled {
background: #94a3b8;
cursor: not-allowed;
transform: none;
}
.app-footer {
text-align: center;
padding: 1.5rem;
color: #64748b;
font-size: 0.9rem;
border-top: 1px solid #e2e8f0;
background: #f8fafc;
}
.app-footer a {
color: #4f46e5;
text-decoration: none;
font-weight: 600;
}
.app-footer a:hover {
text-decoration: underline;
}
/* Responsive adjustments */
@media (max-width: 768px) {
.app-container {
height: 95vh;
border-radius: 16px;
}
.app-header h1 {
font-size: 1.8rem;
}
.message-bubble {
max-width: 90%;
}
}
4. Configure Frontend Proxy (Optional but Recommended)
To avoid CORS issues during development and simplify API calls, you can configure Vite to proxy requests to the Django backend. Create or modify frontend/vite.config.js:
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
export default defineConfig({
plugins: [react()],
server: {
proxy: {
'/api': {
target: 'http://127.0.0.1:8000',
changeOrigin: true,
},
},
},
});
With this, you can change your API_URL in App.jsx to simply '/api/chat/', and Vite will forward the requests to Django seamlessly.
Part 3: Running the Full-Stack Application
The moment of truth! We need to run both the Django backend and the React frontend simultaneously.
1. Start the Backend Server
In your first terminal, navigate to the backend directory, activate the virtual environment, ensure your API key is set, and run the server.
cd backend
source venv/bin/activate # On Windows: venv\Scripts\activate
export GEMINI_API_KEY="your_key_here" # Set the key again
python manage.py runserver
Your Django API should now be running on http://127.0.0.1:8000.
2. Start the Frontend Development Server
Open a second terminal, navigate to the frontend directory, and start the React app.
cd frontend
npm run dev
Vite will start a server, typically on http://localhost:5173. Open this address in your browser.
3. Test Your AI Chatbot
You should now see your beautifully styled chat interface. Type a question like “What is the meaning of life?” or “Write a Python function to calculate factorial” and hit Send. Watch as the Gemini AI’s response streams in real-time, word by word!
Troubleshooting: If you see a network error, check your browser’s Developer Console (F12). Ensure both servers are running, your API key is valid, and there are no CORS errors. If you used the proxy config, verify the Vite server restarted after the change.
Enhancements and Next Steps
Congratulations! You have a fully functional AI chatbot. Here are ideas to level up your project:
- Add Conversation History: Integrate a database (like PostgreSQL) with Django to save and retrieve past conversations per user.
- Implement User Authentication: Use Django REST Framework’s token authentication or Django-Allauth to allow user accounts.
- Improve the UI: Add features like message timestamps, markdown rendering for AI code snippets, a “regenerate response” button, and dark mode.
- Experiment with Models: Try other Gemini variants (such as Gemini 2.5 Pro for deeper reasoning), or explore models like GPT-4 via OpenAI’s API by adapting the backend view.
- Deploy to Production: Use a platform like Vercel for the React frontend and Railway or Render for the Django backend. Remember to set your GEMINI_API_KEY as a secure environment variable on your hosting platform.
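On the first enhancement: the current view sends only the latest message, so the model has no memory of earlier turns. Whatever storage you choose, you will also need to cap how much history goes back to the model on each request. A minimal sketch (the turn format and limit here are our assumptions, not a Gemini API requirement):

```python
def trim_history(history, max_turns=10):
    """Keep only the most recent conversation turns to bound prompt size.

    `history` is a list of dicts like {"role": "user" or "model", "text": ...};
    the last `max_turns` entries are kept, oldest first.
    """
    return history[-max_turns:]

history = [{"role": "user", "text": f"q{i}"} for i in range(25)]
trimmed = trim_history(history, max_turns=10)
assert len(trimmed) == 10
assert trimmed[-1]["text"] == "q24"  # the newest turn survives
```

A length cap is the simplest policy; token counting or summarizing old turns are natural refinements once the basic loop works.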
Conclusion
You’ve successfully built a sophisticated, full-stack AI application. You’ve learned how to securely interface with a powerful LLM via an API, structure a Django REST backend, and create a dynamic, reactive frontend with React. This project showcases a critical modern development skill: integrating disparate technologies into a cohesive, functional product.
The principles you’ve mastered here (API integration, state management, streaming data, and component design) are directly transferable to countless other applications. Keep experimenting, and don’t forget to check Halogenius Ideas for more tutorials that bridge the gap between innovative concepts and practical, buildable skills. Happy coding!