Concept demo · Updated 19 April 2026

Social Policy Pulse — a roadmap written in plain English.

This page exists so Sarah (and anyone else) can see what we're building, why, and how far along we are — without needing to know what AWS, Lambda, or Terraform are. We'll update it as the project moves forward.

What is Social Policy Pulse?

Social Policy Pulse is a research tool for people who study social policy in the Middle East and North Africa. It takes policy reports from a curated list of trusted sources (like the World Bank, the International Labour Organization, and UN ESCWA), asks an AI to read each one and write a plain-English summary, and then lets researchers search, filter, and browse all those summaries in one place.

The goal is to save researchers days of hunting through dozens of different websites every time they need to understand what the evidence says about, for example, cash-transfer programmes for refugees in Jordan.

What exists right now — and what you can click today

Today we have a concept demo live on the internet. It's not a finished product — it's a working prototype that shows how the real thing will feel. What the concept demo already includes is listed milestone by milestone below.

What the concept demo doesn't yet have: real user accounts, payments, the full 200–300 document library, the 13 automated daily scrapers, Arabic language support, or production-grade security. Those all come after the concept — and the rest of this page walks through exactly when.

How we're tracking the work

Done — shipped in the concept demo
In progress — actively being built
Planned — scheduled, not yet started

The work is split into seven milestones. The concept demo covers pieces from several of them. After the concept, we move into a more structured build that finishes each milestone one at a time.

The milestones

M1 — The Foundations · Part done via concept

This is the plumbing. The server set-up, the database, a safe way for new code to go live without anyone logging in to copy files, and cost alarms so the monthly bill never surprises Sarah.

Cloud account set up in Frankfurt (the closest big server region for European and MENA users, and in a country with strict data-privacy law).
Database stood up for free under a 12-month trial tier.
The entire server set-up is now written as code — we can tear it down and rebuild it from scratch in 15 minutes if anything goes wrong.
A working web-service endpoint that the app can talk to from any browser.
Cost alarms: email Sarah if spend passes $10/month, hard-stop the spend at $50.
Separate "practice" / "rehearsal" / "live" environments so we can test big changes without breaking anything real.
Automatic deploy: when an engineer saves a change, the live site picks it up within a few minutes with no manual steps.
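For the curious: the cost-alarm rule above comes down to two thresholds, warn at $10 and hard-stop at $50. A minimal sketch of that decision logic in Python (the dollar figures come from this roadmap; the function and action names are illustrative, not the real implementation):

```python
def check_spend(month_to_date_usd: float) -> str:
    """Decide what the cost alarm should do for the current month's spend.

    Thresholds from the roadmap: warn Sarah at $10, hard-stop at $50.
    """
    if month_to_date_usd >= 50:
        return "hard-stop"    # cut off further spend entirely
    if month_to_date_usd >= 10:
        return "email-sarah"  # send a warning email, keep running
    return "ok"               # nothing to do

# A month that starts cheap and drifts upward:
for spend in (3.20, 12.75, 51.00):
    print(spend, "->", check_spend(spend))
```

In the real system these checks run on the cloud provider's side; the sketch only shows the rule itself.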
M2 — The First Real Document · Part done via concept

Here we prove the hardest piece works: paste a URL, the system fetches the page, extracts the text, asks an AI to summarise and classify it, and drops the result into a moderation queue for a human to approve.

The full AI pipeline is working: fetch the page → extract the text → summarise it → classify it → queue it for Sarah to approve.
Tested live against real pages from the World Bank, ILO, and UN ESCWA — not just made-up examples.
Every AI run logs how much it cost. Total AI spend so far, across the whole build: under a penny.
The five-dimension tagging system (outcomes, policy domains, instruments, population groups, countries) is final and already filled in with the right options.
A working moderation screen where Sarah approves or rejects pending documents with one click.
Keyword search across titles, summaries, and tags — fast, free, good enough for libraries up to 10,000 documents.
Upgrade to a newer, higher-quality AI model (we're on an older one for now because of an account permission we can fix later).
Rate limits to stop anyone hammering the site with thousands of requests a minute.
An automatic "parking lot" for ingestion jobs that fail, so nothing gets silently lost.
M3 — Content Operations · Part done via concept

The tools Sarah and her editorial team need to actually run the platform day-to-day: creating Case Cards, writing Event Cards, reviewing the queue at scale.

Case Card display in the public app (12 cards seeded; target is 50+ for launch).
Event Card display in the public app (10 cards seeded: Lebanon’s collapse, Sudan conflict, Morocco earthquake, MENA wheat shock, Gaza war, Yemen, Syria, Jordan CPF, GCC kafala reforms, Arab poverty surge).
Admin UI to create and edit Case Cards.
Admin UI to create and edit Event Cards.
Separate permissions for "Admin" (Sarah) vs. "Editor" (the two reviewers).
An admin page to inspect and retry failed ingestion jobs.
M4 — Search & Discovery · Part done via concept

How visitors actually find what they need: keyword search, filter by country, by policy outcome, by domain. A homepage that shows the latest approved documents.

Homepage with latest documents, highlight case cards, and running stats.
Browse page with keyword search and three-dimension filters.
Document detail page with the full AI summary, all taxonomy tags, and a proper source citation.
"AI Brief" synthesis tab that writes a short answer across multiple matching documents.
"Case Studies" tab pulling in linked case cards for a search query.
M5 — Accounts & the Paywall · Planned (post-concept)

In the concept demo, you pick a role with a single button click — there are no real accounts. After the concept, we build the real thing: users can register, sign in with Google, save items to a personal workspace, write policy briefs with proper citations, and export them as Word or BibTeX. Free users hit the 15-item monthly paywall; paid users don't.

Demo-only paywall: the Free tier role shows the 15-item paywall experience during the concept.
Real user registration and sign-in (Google at minimum).
"My Workspace" — save documents, case cards, and event cards to your own folder.
Policy-brief editor with inline citations and Word / BibTeX / RIS export.
English + Arabic interface, fully right-to-left.
"Delete my account and everything I saved" flow (GDPR compliance).
Paddle payments (Sarah has no registered company — Paddle handles tax and invoicing on her behalf).
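The paywall rule above is deliberately simple, and for the curious it can be written down in a few lines. A sketch assuming two plan names, "free" and "pro" (the names are illustrative; the 15-item figure is from this roadmap):

```python
FREE_MONTHLY_LIMIT = 15  # from the roadmap: free users get 15 items per month

def can_view(plan: str, items_viewed_this_month: int) -> bool:
    """Paywall check: paid users are unlimited; free users stop at 15."""
    if plan == "pro":
        return True
    return items_viewed_this_month < FREE_MONTHLY_LIMIT

print(can_view("free", 14))   # last free item of the month
print(can_view("free", 15))   # paywall kicks in
print(can_view("pro", 500))   # paid users never hit it
```

The real version also has to reset the counter each month and survive sign-outs, which is exactly why it needs real accounts first.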
M6 — The Full Library · Planned

Today the library has ~60 sample documents. The real launch needs 200–300. This milestone builds the automatic scrapers that watch each of the 13 priority sources every few hours and queue up anything new.

Individual scrapers for all 13 priority sources.
Bulk backfill run — populate 200–300 documents across all sources retroactively.
Public form for visitors to suggest new sources; editor approves or rejects with reason.
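The heart of each scraper is the same small loop: look at a source's listing page, skip everything already ingested, and queue anything new. A sketch of that loop under assumed names (the URLs and data structures here are placeholders, not the real design):

```python
seen_urls: set[str] = set()      # stand-in for the database's "already ingested" record
ingestion_queue: list[str] = []  # stand-in for the real job queue

def poll_source(listing: list[str]) -> int:
    """One scraper run: queue every listed URL we haven't seen before."""
    new = 0
    for url in listing:
        if url not in seen_urls:
            seen_urls.add(url)
            ingestion_queue.append(url)
            new += 1
    return new

# Two runs a few hours apart; the second run finds one genuinely new report.
print(poll_source(["wb.org/r1", "wb.org/r2"]))
print(poll_source(["wb.org/r1", "wb.org/r2", "wb.org/r3"]))
```

Run this for 13 sources every few hours and the library grows on its own, with every new item still landing in the moderation queue for approval.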
M7 — Ready for the Public · Planned

The last layer of polish before a public launch: cookie banner, Terms of Service, performance tuning, weekly email digests, the founding-beta-user programme, observability dashboards, and a disaster-recovery test.

GDPR cookie consent banner + Privacy Policy + Terms of Service.
Performance and caching so search feels instant.
Observability dashboards and alerting (if something breaks, the team knows within seconds).
Static country and topic pages for SEO (so Google can surface "Morocco social protection" directly to policy researchers).
10–15 hand-picked founding-beta users with lifetime free Pro.
Weekly email digest of new approved content.
Automatic security scanning (Dependabot) and a disaster-recovery drill.

What it costs to run, honestly

For the first 12 months, Amazon gives new accounts a free trial, so the concept demo costs roughly $0–3 per month. AI costs are trivial at this scale (under a penny for the whole concept build; about one-fifth of a cent per document). After the free trial ends (April 2027), the baseline becomes roughly $15–20 per month, almost all of it the database. We have a scheduled reminder to review this one month before the trial ends so we can choose to downsize, migrate, or absorb the cost deliberately.

Technology choices, in one line each

This section is for the curious. You can skip it without missing anything.