Monce × Saft America
WHY US, WHY NOW

From 4,000
to 7,000 orders
without hiring.

Proposal for the Valdosta DCOM initiative — follow-up to the 23 April call with Mathieu Hélin and Stéphane Largeau.

Prepared by Monce AI · April 2026 · Live at saft.aws.monce.ai

01 MONCE × SAFT VALDOSTA
Context · Valdosta DCOM

DCOM is working. The ceiling isn't.

Your April KPI deck shows the DCOM project hitting — and beating — every target: lead time below 48 h (target: 3 days), 97 % field extraction (target: 95 %), 100 % digital archiving, all on 1,800 orders. The quantitative bar is cleared. The question is scale.

< 48h
Lead time
target: 3 days
97 %
Fields extracted
target: 95 %
100 %
Digital archiving
1,800+ orders
4,000 → 7,000
Orders / year
the jump to close
90 % of orders are processed by 1 CSR. — Saft Valdosta DCOM KPI deck, April 2026

That is a single point of failure between you and 7,000 orders. The next 3,000 orders don't need better extraction — they need a matcher that scales to every customer layout without re-tuning, and a review workflow that keeps the CSR on exceptions only.

02
Diagnostic

The real friction isn't extraction.

A 97 % field-extraction number is a ceiling for known layouts. Growth from 4k → 7k orders is not more Verizon volume — it's new aviation distributors, new rail customers, new military primes, each with a different PO format and its own manufacturer-PN conventions.

Friction 1

Per-layout tuning

Every new customer means a new template. Extraction accuracy collapses on unseen layouts until a CSR teaches the system the new format — per-customer tuning that doesn't scale.

Friction 2

Manufacturer PN ↔ Saft SKU

Verizon orders 80-94890-02. Your ERP knows it as a different SKU. The cross-reference lives in a CSR's head — and in QAD comments — not in the matcher.

Friction 3

Long-tail master data

9,425 SKUs in OM-Material. Most account for just 3 % of volume. A matcher that over-fits on the hot 2 % collapses on the long tail — and the long tail is where growth comes from.

Scaling means a matcher that doesn't need per-customer engineering: deterministic, auditable, and retrainable in minutes when the master data changes.

03
Proof · Live today

We built it. It's live.

Between the 23 April call and this deck: full pipeline running on the Valdosta master data you sent us. One URL, end-to-end.

Verizon PO
Stage 0/1
regex + Haiku
Stage 2
Sonnet 4.6
Stage 4
Snake SAT
JSON
+ trust score
9,425
SKUs ingested
(OM-Material)
42.8 s
End-to-end
on real Verizon PO
100 %
Snake match
on 80-94890-02
$ 0.05
LLM cost
per PO

✓ Regex tier 5 ms

Verizon Ariba PO 3002630800VERIZON + layout ariba_sap + SAFT confirmed, confidence 0.92. Zero LLM call.
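To make the "zero LLM call" claim concrete, here is a minimal sketch of a tier-0 gate — the regex patterns, layout fingerprint, and confidence value below are illustrative assumptions, not the production rules. The idea: when deterministic patterns identify PO number, layout, and supplier together, the pipeline short-circuits; anything ambiguous escalates to the LLM stage.

```python
import re

# Hypothetical tier-0 patterns -- illustrative, not the production regexes.
PO_NUMBER = re.compile(r"\b(3\d{9})\b")                 # e.g. an Ariba-style 10-digit PO
LAYOUT_HINT = re.compile(r"Ariba Network|SAP Ariba", re.I)
SUPPLIER = re.compile(r"\bSAFT\b")

def tier0(text: str):
    """Return (fields, confidence), or None to escalate to the LLM tier."""
    po = PO_NUMBER.search(text)
    layout = LAYOUT_HINT.search(text)
    supplier = SUPPLIER.search(text)
    if not (po and layout and supplier):
        return None                                      # ambiguous -> stage 1 (LLM)
    return {"po": po.group(1), "layout": "ariba_sap", "supplier": "SAFT"}, 0.92

fields, confidence = tier0("SAP Ariba purchase order 3002630800 issued to SAFT America")
```

The point of the sketch: the common case costs a few milliseconds of regex work, and the LLM tiers only see documents the deterministic gate could not settle.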

✓ Snake 9,425 classes

500 MB SAT classifier trained in < 3 min, inference < 10 ms/query, no GPU. Auditable by line.

✓ 13/13 tests green

Catalog integrity, stage 0/1 regex, trained Snake, live HTTPS smoke — run on every deploy.

🔗 saft.aws.monce.ai · /ui drag-and-drop upload · /snake matching playground · /paper, /architecture, /economics.

04
Moat · The matching problem is solved

Snake is not another LLM.

Your 9,425 SKUs don't need embeddings or a fine-tuned model. They need a polynomial-time SAT classifier that proves its answer by construction. That's exactly what Snake does — and it was designed for this.

The Dana Theorem (2024)

Any indicator function over a finite discrete domain can be encoded as a SAT instance in polynomial time. Decision-tree bucketing reduces it to linear in the sample count.

Charles Dana's thesis (École Polytechnique, supervised by E. Le Pennec). Independently validated against XGBoost, random forests, and deep learning on an NIH-funded dataset — Springer-accepted paper on mitochondrial classification (2025).

What it buys Saft

  • Deterministic. Same PO → same answer. Always.
  • Auditable. Every prediction carries the triggered SAT clauses — CSR can read why.
  • No GPU. t3.medium, 1 GB RAM, pure Python.
  • Retrain in minutes when OM-Material changes.
  • Linear scaling in master-data size.
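A toy sketch of what "deterministic and auditable by construction" means in practice — the clause shapes, field names, and SKU below are illustrative, not Snake's actual encoding. Each SKU is reachable only through explicit conjunctive conditions, and every prediction returns the exact clause that fired:

```python
# Illustrative only: a toy clause-based matcher, not Snake's actual encoding.
# Each rule is a conjunction of (field, value) literals pointing at one SKU.
RULES = [
    ({"mfr_pn": "80-94890-02", "customer": "VERIZON"}, "SKU-12345"),  # hypothetical SKU
    ({"mfr_pn": "80-94890-02"},                        "SKU-12345"),  # fallback clause
]

def match(order: dict):
    """Deterministic: same order always yields the same SKU, plus the clause that fired."""
    for clause, sku in RULES:
        if all(order.get(field) == value for field, value in clause.items()):
            return sku, clause            # audit trail: the CSR can read exactly why
    return None, None

sku, why = match({"mfr_pn": "80-94890-02", "customer": "VERIZON"})
```

Because the answer is a clause lookup rather than a sampled generation, the same PO always yields the same SKU, and the "why" is a data structure a CSR can inspect, not a model weight.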

vs LLM-only

Non-deterministic. Hallucinates SKUs. Costs $/query. Can't explain the answer.

vs RAG + embeddings

Needs vector DB + re-indexing. Silent failure on long-tail SKUs. No audit trail.

vs rule engine

Brittle on new layouts. Per-customer ops cost. That's the DCOM ceiling today.

05
Team

Three people. Built for this.

MH
Mathieu Hélin
CEO · Monce

INSEAD. Building the future of industrial commerce. 6,800 LinkedIn followers, deep network across EU industrial accounts (TotalEnergies, aerospace primes, automotive tier-1s). Owns the commercial relationship.

DZ
Dasha Zuyeva
Building Monce

FH Oberösterreich. AI-native industrial commerce operator. Customer-facing delivery — pilot scoping, master-data onboarding, CSR workflow integration. The person Saft CSRs will actually work with day-to-day.

CD
Charles Dana
AI / ML · Monce

X-HEC Entrepreneurs. Author of Snake and the Dana Theorem (2024). Published in Springer (2025, NIH-funded). Making AI trustworthy · CPU-based AI. Owns the pipeline end-to-end. Shipped this deck's live demo in 4 hours.

Why it matters for Saft: the engineer who wrote the matching theorem is the engineer who will wire it into your ERP. No handoff, no "it works on the data scientist's laptop." Direct line from research to production.

06
Economics

At 7,000 orders / year: > $ 60k saved.

Per-PO cost                    | Monce pipeline     | Manual entry
LLM (Haiku + Sonnet + Haiku)   | $ 0.05             | —
Matcher (Snake, in-process)    | $ 0.00             | —
CSR touch time                 | 30 s spot-check    | 8–15 min data entry
Loaded CSR cost                | $ 0.38             | $ 9.00
Per PO all-in                  | $ 0.43             | $ 9.00
$ 60k+
Annual savings
at 7,000 POs/year
$ 36 / mo
Fixed infra
(EC2 + DNS + TLS)
0
GPU
required
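The headline number is just the per-PO delta times annual volume. A quick check of the arithmetic, using the per-PO figures from the table above in integer cents to avoid float rounding:

```python
# Per-PO costs from the table above, in cents.
MANUAL_CENTS = 900        # $9.00 loaded CSR cost, full manual entry
PIPELINE_CENTS = 43       # $0.43 all-in: $0.05 LLM + $0.38 CSR spot-check
ORDERS_PER_YEAR = 7_000

delta_cents = MANUAL_CENTS - PIPELINE_CENTS           # 857 -> $8.57 saved per PO
annual_savings = delta_cents * ORDERS_PER_YEAR / 100  # ~$60k per year
```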

Three model_mode knobs let Saft trade cost for accuracy per customer: cheap ($0.01), balanced ($0.05, default), accurate ($0.12). Expensive models fire only when Snake's top-1 confidence falls below θ_auto.
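A sketch of that routing rule. The per-mode costs are the quoted price points; the threshold value and the dispatch function itself are illustrative assumptions, not the production code (`THETA_AUTO` stands in for the auto-approve threshold above):

```python
# Illustrative confidence-gated dispatch; costs are the quoted per-PO price points.
MODES = {"cheap": 0.01, "balanced": 0.05, "accurate": 0.12}
THETA_AUTO = 0.90   # hypothetical threshold; assumed configurable in production

def route(snake_top1_confidence: float, mode: str = "balanced") -> tuple[str, float]:
    """Fire the expensive model only when Snake is unsure; else auto-approve for free."""
    if snake_top1_confidence >= THETA_AUTO:
        return "auto-approve", 0.0            # Snake alone, no LLM spend
    return f"llm:{mode}", MODES[mode]         # escalate at the chosen mode's price point
```

So the $0.05 "balanced" cost is a ceiling, not an average: POs that Snake resolves above the threshold never touch an LLM at all.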

07
Pilot proposal · 60 days

A clean 60-day pilot. Measurable exit criteria.

Phase 1 · Weeks 1–3

Master-data onboarding

  • Live OM-Material ingestion pipeline (weekly or on-change)
  • Manufacturer-PN ↔ Saft-SKU cross-reference bootstrapped from 6 months of historical PO receipts
  • Customer catalog expanded beyond the 20 seed accounts
  • Re-train Snake → deploy → 13-test suite green on Valdosta data

Phase 2 · Weeks 4–8

Shadow + A/B against DCOM

  • Ingest every Valdosta PO through both pipelines in parallel
  • Zero risk to production: DCOM remains source of truth
  • Daily field-level diff report (Monce vs DCOM)
  • Weekly review with Stéphane + 1 Valdosta CSR

Exit criteria · measurable, non-negotiable

≥ 97 %
Field match
vs DCOM
≥ 70 %
Auto-approved
no CSR touch
< 30 s
Avg CSR
touch time
100 %
Long-tail SKUs
audit-passable

Hit all four and we extend — Saft pays for production. Miss any and Monce walks, no lock-in, Saft keeps every artifact produced.

08
Asks · Before we can close this

Seven questions. Answer them, we ship.

1. How is the 97 % measured?

Per-field exact match, per-line ok/ko, or CSR-accepted-as-is? Determines our pilot comparator.

2. What blocks the 4k → 7k jump?

CSR headcount, new layouts, error recovery, master-data drift? Defines phase-2 scope.

3. Master-data refresh cadence?

Daily batch, weekly CSV, change-feed? We build the ingestion to match.

4. Where does mfr-PN ↔ Saft-SKU live today?

QAD comments, CSR head, XLS sidecar? That's the bootstrap source for Snake aliases.

5. Integration target?

Direct QAD EE API, S3/SFTP queue, or existing DCOM post-processing hook? All three buildable, pick one.

6. Data-residency constraints?

Bedrock eu-west-3 default. US-region or SageMaker/on-prem if TotalEnergies policy requires.

7. Scope beyond Valdosta?

Poitiers CSE was mentioned. Single site first, or platform play from day 1?

+ 0. The Fireflies transcript

Stéphane — can you forward it? Keeps us aligned on what was actually agreed on the call.

Full list: QUESTIONS.md in the saft.aws.monce.ai repo (17 questions).

09
Next step

One call this week.
A pilot start by 15 May.

We built the proof in four hours on the dataset you sent. Give us 60 days on the Valdosta feed and we meet the 7k ambition — or walk.

Live demo

saft.aws.monce.ai/ui — drag a Valdosta PO, see the full pipeline.

Technical deep-dive

/paper, /architecture, /economics.

Repo + tests

Monce-AI/saft.aws.monce.ai — private, 13/13 green.

Contact

Mathieu Hélin · CEO, Monce · mathieu@monce.ai
Charles Dana · AI/ML · charles@monce.ai

10 MERCI · MONCE × SAFT VALDOSTA · APRIL 2026