Data Analyst Agent
CodeAct for Marketing Analytics

An AI agent that writes and executes Python code to answer marketing questions. Metrics are computed, not hallucinated.
Overview
A Chainlit-based agent that demonstrates the CodeAct pattern: instead of generating text explanations, the agent writes Python code that computes real metrics against real data. Ask a marketing question, get executable analysis.
Challenge
LLMs hallucinate numbers. Ask "what was our CTR last month?" and you get a plausible but fabricated answer. For business analytics, this is worse than useless. The agent needs to compute, not confabulate.
Approach
Implemented the CodeAct pattern: the agent generates Python code (pandas, matplotlib) that runs against actual campaign data. Results come from computation, not generation.
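The core loop can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the helper name `run_generated_code`, the `result` convention, and the sample data are all assumptions, and a real deployment would sandbox the `exec` call.

```python
# Minimal CodeAct-style execution sketch (illustrative, not the project's code).
# The LLM returns a Python snippet; we exec it against the real campaign
# DataFrame and read back a `result` variable instead of trusting prose.
import io
import contextlib

import pandas as pd

def run_generated_code(code: str, df: pd.DataFrame) -> dict:
    """Execute model-generated analysis code against real data."""
    namespace = {"df": df, "pd": pd}
    stdout = io.StringIO()
    with contextlib.redirect_stdout(stdout):
        exec(code, namespace)  # in production, run this in a sandbox
    return {"result": namespace.get("result"), "stdout": stdout.getvalue()}

# Example: a snippet the model might emit for "what was our CTR last month?"
campaigns = pd.DataFrame({"clicks": [120, 80], "impressions": [4000, 3200]})
snippet = "result = df['clicks'].sum() / df['impressions'].sum()"
out = run_generated_code(snippet, campaigns)
```

The CTR here comes out of pandas arithmetic over actual rows, which is the whole point: the model's text never carries the number, only the code that computes it.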
Built four prompt versions (v1-v4) to test how system prompt design affects output quality. Each version adds structure: metric definitions, analysis patterns, professional formatting.
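The kind of structure the later versions add might look like this. The actual v1-v4 prompts are not reproduced here; the formulas shown are standard marketing definitions, not quotes from the project.

```python
# Illustrative reconstruction of how a structured prompt version (v3-style)
# extends a bare baseline (v1-style). Names and wording are assumptions.
PROMPT_V1 = "You are a data analyst. Answer marketing questions with Python code."

PROMPT_V3 = PROMPT_V1 + """

Metric definitions (use these exact formulas):
- CTR  = clicks / impressions
- CPC  = spend / clicks
- ROAS = revenue / spend

Analysis pattern:
1. Load and filter the relevant rows.
2. Compute the metric with pandas; assign it to `result`.
3. Print a one-line, formatted summary.
"""
```

Pinning down metric formulas in the prompt removes a common failure mode: the model otherwise picks its own definition (e.g. per-campaign average CTR vs. pooled CTR) and produces code that runs but answers a different question.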
Created analysis modes (Quick, Deep, Executive) that adjust the depth and framing of the output. Quick gives raw numbers; Executive adds context and recommendations.
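One simple way to implement such modes is as prompt modifiers layered onto the base system prompt. A hypothetical sketch, with mode wording invented for illustration:

```python
# Hypothetical mode-to-instruction mapping; the project's exact wording
# and mechanism are not shown here.
MODES = {
    "quick": "Report the requested numbers only. No commentary.",
    "deep": "Show intermediate calculations and segment breakdowns.",
    "executive": "Summarize the numbers, then add context and one recommendation.",
}

def build_system_prompt(base: str, mode: str) -> str:
    """Append the selected mode's instructions to the base system prompt."""
    return f"{base}\n\nAnalysis mode: {mode}\n{MODES[mode]}"
```

Keeping modes as appended instructions (rather than separate full prompts) means the metric definitions and analysis patterns stay identical across modes; only the presentation varies.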
Added a CLI agent for rapid testing and prompt iteration without the UI overhead.
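A CLI entry point for this kind of iteration can be as small as the sketch below. The `ask` function is a placeholder standing in for the full LLM-to-code-to-execution pipeline; it is not a real function from the project.

```python
# Hypothetical minimal CLI for prompt iteration without the Chainlit UI.
import argparse

def ask(question: str, mode: str) -> str:
    # Placeholder: the real agent would generate and execute pandas code here.
    return f"[{mode}] analysis for: {question}"

def main(argv=None):
    parser = argparse.ArgumentParser(description="Data Analyst Agent CLI")
    parser.add_argument("question", help="marketing question to analyze")
    parser.add_argument("--mode", choices=["quick", "deep", "executive"],
                        default="quick")
    args = parser.parse_args(argv)
    print(ask(args.question, args.mode))

if __name__ == "__main__":
    main()
```

Driving the same agent code path from both Chainlit and a CLI keeps prompt experiments fast: each v1-v4 comparison is one shell command instead of a browser round trip.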
Outcome
The CodeAct approach eliminated hallucinated metrics entirely. Prompt version experiments showed that structured system prompts (v3, v4) produced more reliable code with fewer runtime errors. The project became a reference for grounded AI in business contexts.