Insight
28 January 2026
Help government officers analyse surveys in minutes instead of days, so they can improve public services faster.

Opportunity
Problem Statement
Government officers spend 3-5 days manually analysing survey responses that could be processed in minutes. This delays improvements to services like ActiveSG, ScamShield, and Community Hackathons, where citizen feedback directly shapes decisions.
Research
We surveyed 20 officers from OGP and MHA, mostly designers, researchers, and operations officers. Key findings:
80% use Google Sheets for survey responses and demographic data
75% find the survey workflow challenging, especially data cleanup, analysis, and visualisation
Officers can collect data easily but struggle with what comes after, such as cleaning, analysing, and presenting findings
Most rely on multiple disconnected tools and rarely use statistical analysis to determine whether trends are significant enough to act on
Critical Pain Points
Manual categorisation is extremely time-consuming. Officers spend hours organising responses into themes, creating backlogs of unanalysed data.
Data scattered across multiple tools. When analysing feedback for products like ActiveSG, officers juggle Google Sheets, FormSG, FigJam, and Notion. Data gets copied multiple times, creating version control issues.
Messy data requires extensive cleanup. Duplicates, invalid responses, and inconsistent formatting must be fixed before analysis can begin.
No clear significance thresholds. Officers can't tell if findings represent meaningful patterns or random variation.
AI trust issues. Officers hesitate to use AI tools because they can't verify confidence levels or review outputs, and they worry about sharing sensitive data externally.
Limited analysis capabilities. Existing tools don't easily support complex questions like correlating qualitative feedback with demographics or identifying patterns across different user groups.
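To illustrate the kind of significance check officers currently skip, here is a hypothetical example (not part of any existing workflow or tool): a two-proportion z-test, using only the Python standard library, that tells you whether a gap between two groups of respondents is likely a real pattern or just noise.

```python
from math import erf, sqrt

def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
    """Return (z, two-sided p-value) for the difference between two proportions.

    Hypothetical scenario: group A mentions an issue more often than
    group B. A small p-value (conventionally < 0.05) suggests the gap
    is unlikely to be random variation.
    """
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)          # combined rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 30 of 100 respondents in group A vs 18 of 100 in group B flag the same issue
z, p = two_proportion_z_test(30, 100, 18, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 here, so the gap looks meaningful
```

With a threshold like p < 0.05 agreed in advance, "is this trend significant enough to act on" becomes a yes/no answer rather than a judgment call.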
Velocity
What We Built This Month
In January, we built a working prototype that lets officers upload survey data and get AI-powered insights in minutes instead of days, without switching between multiple tools.
Current Features That Work
Classify Feature: Officers can upload CSV files and use AI to automatically categorise open-ended responses into custom labels like "technical issues" or "UX problems." The system handles multiple labels per response and processes hundreds of responses instantly.
Analysis Feature: Officers ask questions like "What are the main complaints?" and get statistical analysis with insights and recommendations, replacing manual pivot tables and pattern hunting.
Charts Feature: Users can prompt the LLM to generate visualisations of classified data and analysis results to spot trends and share findings with stakeholders.
What We're Still Refining
We're refining our AI prompts to make outputs more readable and actionable for government officers. We're also improving the UI/UX and scaling classification to handle larger datasets and more concurrent users. To help new users onboard easily, we're creating a guide with walkthrough demos and tips on getting the most out of Insight.
Try out Insight here: https://insight.hack2026.gov.sg/
Traction
Real Usage and Results
We tested our prototype with real user feedback data, achieving significant time savings.
200 issues reported on the ScamShield feedback channel were classified in ~1 minute, with a complete trend analysis finished in ~3 minutes.
Total time: ~4 days reduced to ~4 minutes, i.e. a ~99% reduction in analysis time.
An ActiveSG designer analysed two datasets (100-155 rows each) using the summary and sentiment analysis features, completing work that would normally take hours manually.
"I love it already, I think it has a lot of potential and I will definitely use it moving forward. I love that there are a lot of prompts along the way, so for me as a noob prompt engineer I can just press the suggestions. This summary is exactly what I need." — ActiveSG Designer
Growing Interest
42 government officers registered interest in trying Insight (Beta) during hackathon demo day, with 75% working in the education and health sectors.
We are planning a 6-8 month pilot with ~100 expected users to measure broader impact and value before scaling the product further.
Post-hackathon, we hope to secure operational costs to run this extended pilot and demonstrate the tool's long-term value across government agencies.