Pongpanoch Chongpatiyutt

Hi, I'm Pongpanoch Chongpatiyutt.

AI Engineer

I build practical AI systems for real teams: ingest, retrieve, evaluate, and ship with confidence.

I'm an AI Engineer focused on reliable retrieval and practical AI agent systems.

I started in machine learning through university coursework, then moved deeper into the engineering side: ingestion quality, retrieval grounding, and stable behavior under real operating constraints.

At Saifa AI, I worked on production-oriented RAG and worker architecture, including PDF and web ingestion, Qdrant indexing, event-driven processing, and evaluation workflows for repeatability. I also contributed to multi-agent reliability improvements with safer fallbacks, clearer routing, and better replay and debug workflows.

I enjoy building end-to-end AI workflows that teams can operate confidently, from data preparation and retrieval to response generation and regression checks.

Most company projects are NDA-bound, so I share architecture patterns, tooling, and quality safeguards rather than confidential product details.

Experience

AI Engineering Intern, Saifa AI (Sep 2025 - Present)

  • Built and improved RAG pipelines for PDF and website ingestion, including extraction, chunking, metadata design, and Qdrant indexing and filtering.
  • Implemented repeatable evaluation workflows with auditable artifacts (JSONL logs, replay payloads, extraction and retrieval comparisons) for regression checks.
  • Stabilized the core event-driven worker flow (message-in to reply-out) in a RabbitMQ and FastAPI architecture with defensive fallback handling.
  • Improved reliability through payload validation, idempotent handling for duplicate events, structured logging, and clearer failure classification.

Tooling

Python · FastAPI · RabbitMQ · Qdrant · LangChain · OpenAI API · Anthropic API · Gemini API · RAG · Vector Search · JSONL Evaluation Logs · Prompt Engineering · Docker · Git
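Because the production code is NDA-bound, here is a minimal, hypothetical sketch of the validate-then-deduplicate shape behind those reliability bullets: payload validation with explicit failure classes, plus idempotent handling of duplicate events. Names like handle_event and _seen_event_ids are illustrative only, and a real worker would back the dedup set with Redis or a database rather than process memory.

```python
import json
import logging

logger = logging.getLogger("worker")

# In production this would be Redis or a database table;
# an in-memory set keeps the sketch self-contained.
_seen_event_ids: set[str] = set()

REQUIRED_FIELDS = ("event_id", "conversation_id", "message")

def handle_event(raw_body: bytes) -> dict | None:
    """Validate, deduplicate, and classify one incoming event."""
    # 1. Payload validation: reject malformed JSON with a clear failure class.
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        logger.warning(json.dumps({"failure_class": "malformed_payload"}))
        return None

    missing = [f for f in REQUIRED_FIELDS if f not in payload]
    if missing:
        logger.warning(json.dumps({"failure_class": "missing_fields", "fields": missing}))
        return None

    # 2. Idempotency: a redelivered event_id is logged and skipped, not reprocessed.
    event_id = payload["event_id"]
    if event_id in _seen_event_ids:
        logger.info(json.dumps({"failure_class": "duplicate_event", "event_id": event_id}))
        return None
    _seen_event_ids.add(event_id)

    # 3. Normal path: hand off to the reply pipeline (stubbed here).
    return {"event_id": event_id, "status": "accepted"}
```

The point of the shape: malformed payloads and duplicates are logged with a failure class and dropped early, so the reply path only ever sees validated, first-delivery events.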

Student Assistant, TU Berlin - Faculty V (Aug 2023 - May 2025)

  • Managed CNC-based fabrication workflows for open-source hardware projects, from CAD and CAM preparation to machine operation.
  • Supported machine setup and handoff quality checks to keep fabrication runs repeatable for student and research teams.
  • Helped build and operate makerspace infrastructure for student and research projects.
  • Produced practical documentation and structured handoffs to support reproducible technical work.

Tooling

Fusion 360 · CAD/CAM · CNC Programming · Workshop Tooling · 3D Printing · Technical Documentation

Project: Reusable RAG Delivery Foundation

When teams start building a new AI agent, they often spend days rebuilding the same RAG plumbing. I designed and implemented this reusable baseline to ingest sources, retrieve evidence, and answer with citations.

For each new use case, I can focus on what actually matters: fit to domain data, retrieval/prompt tuning, and edge-case validation on real documents.

In my delivery workflow, this typically saves about 20-40 engineering setup hours per use case before the first reliable stakeholder demo.

Problem

New agent initiatives frequently stall at stack selection, ingestion reliability, and retrieval quality setup.

System

A production-style RAG foundation with PDF/URL ingestion, vector indexing, grounded chat, and citation traces.

Benefit

Teams can move from concept to domain testing faster, with less regression risk and less rework.

Core Stack

Next.js · TypeScript · OpenAI API · Pinecone · Browserless · puppeteer-core · pdf-parse · React Markdown · Tailwind CSS

System Design

A single Next.js app exposes ingest/chat APIs, chunks and indexes sources into Pinecone, and answers with retrieval-grounded context plus citations.
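The demo itself is TypeScript, but the grounded-answer step is language-agnostic. Below is a minimal Python sketch of how retrieved chunks can be assembled into a citation-tagged prompt; the Chunk fields and the [n] numbering scheme are illustrative assumptions, not the demo's exact code.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source_id: str   # which PDF/URL the chunk came from
    text: str
    score: float     # vector similarity from the index

def build_grounded_prompt(question: str, chunks: list[Chunk]) -> tuple[str, list[str]]:
    """Assemble a citation-tagged context block plus the citation list."""
    context_lines, citations = [], []
    for i, chunk in enumerate(chunks, start=1):
        context_lines.append(f"[{i}] {chunk.text}")
        citations.append(f"[{i}] {chunk.source_id}")
    prompt = (
        "Answer using only the numbered context below and cite sources as [n].\n"
        "If the context is insufficient, say so.\n\n"
        + "\n".join(context_lines)
        + f"\n\nQuestion: {question}"
    )
    return prompt, citations
```

Keeping the citation list alongside the prompt is what makes the answer traceable: each [n] in the response maps back to a concrete ingested source.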


Pong AI Demo

Upload PDF or ingest URL, then chat with grounded responses.

The demo surfaces its ingestion status (Idle → Uploading → Scraping → Ingesting → Ready) and lists active sources; once ingestion is ready, the Pong Assistant answers questions grounded in the uploaded files and URL content.

Portfolio demo: verify important details before making decisions.

Companion Tools I Built

The demo is the baseline layer. These companion tools make RAG quality measurable, tunable, and repeatable across different business use cases.

RAG Evaluation Harness

  • Runs golden-set queries with expected evidence and explicit pass/fail criteria.
  • Tracks citation coverage, groundedness, hit rate, latency, and per-query cost across versions.

Outcome: Prevents silent quality regressions before deployment.

Stack

Python · JSONL Traces · Pandas · Jupyter Notebooks · Regression Baselines
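As a hedged illustration of the harness's scoring loop (JSONL field names like expected_evidence_id and answer_grounded are hypothetical), a single run reduces to version-comparable metrics plus an explicit pass/fail gate:

```python
import json

def evaluate_run(results_path: str, hit_rate_floor: float = 0.9) -> dict:
    """Score one JSONL run of golden-set queries against explicit criteria."""
    total = hits = grounded = cited = 0
    latencies, costs = [], []
    with open(results_path) as f:
        for line in f:
            row = json.loads(line)  # one golden query per line
            total += 1
            # A "hit": the expected evidence chunk appears in the retrieved set.
            hits += row["expected_evidence_id"] in row["retrieved_ids"]
            grounded += row["answer_grounded"]  # judged boolean
            cited += bool(row["citations"])
            latencies.append(row["latency_ms"])
            costs.append(row["cost_usd"])
    hit_rate = hits / total
    return {
        "hit_rate": hit_rate,
        "groundedness": grounded / total,
        "citation_coverage": cited / total,
        "p50_latency_ms": sorted(latencies)[total // 2],
        "total_cost_usd": round(sum(costs), 4),
        "passed": hit_rate >= hit_rate_floor,  # explicit pass/fail gate
    }
```

Comparing this dict across two runs of the same golden set is what turns a "feels fine" change into a checked regression.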

Chunking and Retrieval Comparator

  • Compares chunk size/overlap, top-k, thresholds, and reranking strategies on the same corpus.
  • Produces side-by-side retrieval and answer quality reports to guide tuning decisions.

Outcome: Turns retrieval tuning from guesswork into evidence-driven iteration.

Stack

LangChain · RecursiveCharacterTextSplitter · OpenAI Embeddings · Pinecone Namespaces · Retrieval Parameter Sweeps · Pandas · Jupyter Notebooks
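A sketch of the sweep shape, assuming LangChain's RecursiveCharacterTextSplitter (a real class; the sweep_chunking wrapper and its default grids are illustrative). Each config splits the same corpus, so the resulting stats and any downstream retrieval runs are directly comparable:

```python
from itertools import product

from langchain_text_splitters import RecursiveCharacterTextSplitter

def sweep_chunking(corpus: list[str],
                   sizes=(500, 1000),
                   overlaps=(0, 100)) -> list[dict]:
    """Split the same corpus under each config and report comparable stats."""
    reports = []
    for chunk_size, chunk_overlap in product(sizes, overlaps):
        splitter = RecursiveCharacterTextSplitter(
            chunk_size=chunk_size, chunk_overlap=chunk_overlap
        )
        chunks = [c for doc in corpus for c in splitter.split_text(doc)]
        reports.append({
            "chunk_size": chunk_size,
            "chunk_overlap": chunk_overlap,
            "n_chunks": len(chunks),
            "avg_len": sum(map(len, chunks)) / max(len(chunks), 1),
        })
    return reports  # feed into a side-by-side retrieval comparison
```

In practice each config would also be indexed into its own Pinecone namespace so retrieval quality, not just chunk statistics, can be compared on identical queries.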

Edge-Case Replay and Failure Analyzer

  • Replays problematic queries and classifies misses: no-evidence, wrong-source, and partial-answer.
  • Links each failure to trace artifacts so prompt, retrieval, and ingestion fixes are faster.

Outcome: Cuts debugging time and improves reliability on real user edge cases.

Stack

FastAPI · RabbitMQ · Structured Logging · Replay Payloads · Git
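A minimal sketch of the miss taxonomy as code, under assumed trace fields (retrieved_ids, expected_source_id, answer_covers_expected); a production analyzer would read these from stored replay payloads rather than hand-built dicts:

```python
def classify_failure(trace: dict) -> str:
    """Map one replayed query trace to a failure class."""
    if not trace["retrieved_ids"]:
        return "no-evidence"      # retrieval returned nothing usable
    if trace["expected_source_id"] not in trace["retrieved_ids"]:
        return "wrong-source"     # evidence came from the wrong document
    if not trace["answer_covers_expected"]:
        return "partial-answer"   # right source, incomplete answer
    return "pass"

if __name__ == "__main__":
    sample = {
        "retrieved_ids": ["doc-2"],
        "expected_source_id": "doc-7",
        "answer_covers_expected": False,
    }
    print(classify_failure(sample))  # -> "wrong-source"
```

Classifying misses this way is what lets each failure link back to the right fix: no-evidence points at ingestion, wrong-source at retrieval, partial-answer at prompting.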

Contact

For AI/ML/Software opportunities or project collaboration, feel free to reach out.

p.chongpatiyutt@gmail.com
LinkedIn · GitHub