Technical details
Tech Stack
Built with Tailwind CSS. The static HTML/CSS layouts were generated with Stitch to speed up UI development, letting me focus on the logic: chat interaction, quiz generation and parsing, and progress tracking. Chart.js powers the interactive charts.
Why use AI for the static HTML and CSS? I already know HTML and CSS; using AI to generate them sped up the process and left time for the backend logic, demo, and documentation, which is crucial since I am doing the hackathon solo.
I chose Django as the Python framework for this project. I picked it over lighter, faster frameworks like FastAPI or Flask because I am most comfortable with Django, and its built-in ORM made database management friction-free.
Along with that, libraries such as markdown-it-py, PyMuPDF, python-docx, and functools are used, respectively, to display the LLM's responses in a structured manner, to read and extract text from PDF and .docx files, and to build login-required wrapper decorators for views.
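A login-required wrapper of the kind mentioned above can be sketched with functools like this. This is a minimal, framework-agnostic illustration, not the project's actual decorator; the `user_id` attribute and the redirect placeholder are assumptions for the example.

```python
import functools

def login_required(view):
    """Reject requests with no logged-in user; otherwise call the view.
    functools.wraps preserves the wrapped view's name and docstring,
    which keeps debugging and URL introspection sane."""
    @functools.wraps(view)
    def wrapper(request, *args, **kwargs):
        if not getattr(request, "user_id", None):  # hypothetical session field
            return "redirect:/login/"  # placeholder for an HTTP redirect
        return view(request, *args, **kwargs)
    return wrapper

@login_required
def dashboard(request):
    return f"dashboard for user {request.user_id}"
```

In the real app the wrapper would return an actual redirect response, but the shape of the decorator is the same.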
Authentication is handled by Supabase, while user profiles and quizzes are stored in a local Postgres database due to technical issues with Supabase's RLS (row-level security) policies.
Saathi itself is powered by Gemini 2.5 Flash with a system prompt that ensures customizability and reliability in its answers. Gemini 2.5 Flash was chosen over other models despite its 8-10 second latency because it gives the best response quality among the Gemini models.
For quiz generation, the faster Gemini 2.5 Flash-Lite model is used. How can we be certain the response will be structured the way the rest of the code assumes?
Setting "response_mime_type" to "application/json" constrains Gemini to return only a JSON object, structured as specified in the system prompt.
```python
from google import genai

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",
    contents=prompt,  # prompt built from the uploaded content
    config={
        "response_mime_type": "application/json"  # ensures JSON output
    },
)
```
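Because the model is constrained to JSON, the backend can parse and sanity-check the result before trusting it elsewhere. A minimal sketch, assuming a hypothetical quiz shape (the field names `question`, `options`, and `answer` are illustrative, not the project's actual schema):

```python
import json

def parse_quiz(raw: str) -> list:
    """Parse the model's JSON string and validate the assumed shape:
    a list of question objects, each with options and a correct answer."""
    questions = json.loads(raw)
    for q in questions:
        if not {"question", "options", "answer"} <= q.keys():
            raise ValueError("malformed question object")
        if q["answer"] not in q["options"]:
            raise ValueError("answer must be one of the options")
    return questions
```

Validating up front means a malformed generation fails loudly at the boundary instead of corrupting quiz state later.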
Quiz Generation Flow Diagram
Content gets parsed
After the user clicks the button to generate a quiz, the backend uses format-specific libraries to produce chunks of text that can be passed to the LLM. As noted above:
PDF - PyMuPDF
.docx - python-docx
.txt file - Python's file handling (with open())
Text from the text area - direct string concatenation
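The dispatch above can be sketched as a single helper. This is an illustration, not the project's actual code; `extract_text` is a hypothetical name, and the third-party imports are done lazily so each branch only needs its own library.

```python
from pathlib import Path

def extract_text(path: str) -> str:
    """Route an uploaded file to a format-specific extractor."""
    suffix = Path(path).suffix.lower()
    if suffix == ".pdf":
        import fitz  # PyMuPDF
        with fitz.open(path) as doc:
            return "".join(page.get_text() for page in doc)
    if suffix == ".docx":
        from docx import Document  # python-docx
        return "\n".join(p.text for p in Document(path).paragraphs)
    if suffix == ".txt":
        with open(path, encoding="utf-8") as f:
            return f.read()
    raise ValueError(f"Unsupported format: {suffix}")
```

Text pasted into the text area skips this entirely and is concatenated into the prompt as-is.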
Visualized using Chart.js
Whether they performed well or not, seeing where they actually stand is a motivational boost that keeps many people going, even when STEM is throwing complex math at them. Specifically, the page has:
Line graph showing accuracy trends over last 10/15/20/25/30 quizzes
Overall stats card displaying: total quizzes, questions answered, current streak, improvement %, average accuracy, best accuracy (in the last 30 days)
A "Last updated" timestamp
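The accuracy-trend series the line graph plots can be computed with a few lines before being handed to Chart.js. A sketch under the assumption that quizzes are stored as chronological (correct, total) pairs; the helper name is hypothetical:

```python
def accuracy_trend(quizzes, n=10):
    """Accuracy (%) for each of the last n quizzes, oldest first --
    the series a line chart like the one above would plot."""
    return [round(100 * correct / total, 1) for correct, total in quizzes[-n:]]
```

The view would serialize this list (and the matching quiz labels) into the template context for Chart.js to render.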