Yeah — a few things are still “under the rug”. You can ignore some at MVP, but not at “we’re selling to cities/utilities/parks”.
1. MRV problem (Measurement, Reporting, Verification) ✅ COMPLETED
~~You’re promising “€ saved” and “t CO₂ avoided”. That sounds nice, but:
- Who signs off on the number — you, the two companies, or the city?
- What happens when both sides (and the utility, and the city) want to count the same CO₂ reduction? Double counting kills credibility.
- For grants, you’ll be asked for a transparent formula + auditable inputs.
So you need a small, boring “MRV module” that explains: data source → calculation → standard (GHG Protocol / ISO 14064 / EU Taxonomy). Otherwise municipalities won’t use your numbers in official reporting.
→ Action: define 2–3 approved calculators and lock them. Everything else = “indicative”.~~
✅ UPDATED: Defined 3 approved calculators (heat recovery, material reuse, water recycling) with transparent GHG Protocol/ISO 14064 formulas, auditable inputs, double counting prevention, and sign-off processes. Carbon accounting made informational/unverified to avoid regulation trap.
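For illustration, an approved calculator can be as boring as one auditable formula. A minimal sketch in SQL for the heat-recovery case, where every table and column name is an assumption, not something from the doc:

```sql
-- Heat recovery: CO2 avoided (t) = heat delivered (MWh) * emission factor of
-- the displaced fuel (tCO2/MWh). Savings stay labeled "indicative".
SELECT
  x.exchange_id,
  x.heat_delivered_mwh * f.emission_factor_tco2_per_mwh AS co2_avoided_t,
  x.heat_delivered_mwh * x.displaced_heat_price_eur_mwh AS eur_saved_indicative
FROM heat_exchange_measurement x
JOIN fuel_emission_factor f ON f.fuel_code = x.displaced_fuel_code;
```

Locking the formula means locking the emission-factor table too: version it, and record which version each published number was computed with.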
2. Data ownership & confidentiality ✅ COMPLETED
~~Industrial symbiosis exposes the ugliest internal data (waste, inefficiencies, off-spec outputs). That’s politically sensitive inside factories.
- You need a clear “who can see what in the cluster” model.
- Cities will want aggregate views; companies will want to hide origin.
- Utilities may want to resell data → you must stop that or monetize with them.
- You will need a DPA/GDPR pack for EU and an anonymization layer for flows.
If you don’t solve this early, adoption stalls not for tech reasons but for “legal didn’t sign”.~~
✅ UPDATED: Created clear visibility matrix (companies see anonymized matches, cities get aggregates, utilities get network data), GDPR/DPA compliance, k-anonymization, data ownership rules preventing resale.
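One way to make “cities see aggregates, origins stay hidden” enforceable at the data layer rather than in UI code. A sketch assuming a resource_flow table and k = 5:

```sql
-- City-facing view: suppress any group small enough to identify a facility.
CREATE VIEW city_flow_aggregate AS
SELECT zone_id,
       resource_type,
       SUM(quantity)               AS total_quantity,
       COUNT(DISTINCT facility_id) AS contributing_facilities
FROM resource_flow
GROUP BY zone_id, resource_type
HAVING COUNT(DISTINCT facility_id) >= 5;   -- crude k-anonymity threshold
```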
3. Procurement-readiness ✅ COMPLETED
~~Selling to municipalities/parks/utilities ≠ selling to startups.
- They will ask: security, hosting location, SLA, RPO/RTO, DPAs, sometimes ISO 27001 roadmap.
- They may not be allowed to buy “€400/facility/month” — they buy per year, per site, per user, or per project.
- Sometimes you must support on-prem / sovereign cloud. Your doc mentions it — good — but then your infra costs and margin assumptions change.
So: make a “public-sector SKU” with slower onboarding, fixed price, clearer terms.~~
✅ UPDATED: Added security certifications (ISO 27001, SOC 2, NIS2), SLA/RTO/RPO guarantees, on-prem/sovereign cloud (€30-80k/year minimum), DPA templates, procurement-compliant annual/per-site/per-user pricing.
4. Local facilitator capacity ✅ COMPLETED
~~Your model assumes that when a match is complex, "a facilitator" appears. Reality: there are not that many people who can actually do heat/water/by-product feasibility in a mid-size EU city.
- Either you build a small internal facilitation team (cost ↑, but speed ↑),
- or you curate and train local engineering/ESG consultancies and give them your templates.
Without this, your match-to-implementation ratio will be lower than you modeled.~~
✅ UPDATED: Implemented facilitator ecosystem approach - curate and train local engineering/ESG consultancies with platform templates, create certification programs, build regional hubs (Berlin, Paris, Amsterdam, Barcelona).
5. Utility/channel incentives ✅ COMPLETED
~~Be careful: utilities sometimes don’t love load-reducing symbiosis if it reduces their sales.
- District heating operator: ok with optimizing flows.
- Electricity supplier: maybe less ok with customers reducing offtake.
So you need to offer utilities new products (forecasting, capex planning, “who to connect next”) so they see upside, not cannibalization.~~
✅ UPDATED: Added utility partnerships with forecasting, capex planning, load balancing, carbon trading products to offset load reduction concerns.
6. Policy volatility ✅ COMPLETED
~~You’re leaning on EU Green Deal / CSRD / circularity / local-climate plans. Good, but:
- EU and national green programs are getting periodically re-scoped and delayed.
- Cities change mayors every 4–5 years → your champion can disappear.
So don’t build a plan that collapses if one program gets paused. You need: city → utility → industry association → park operator. Multiple doors.~~
✅ UPDATED: Added policy-resilient entry points (city→utility→industry→park) to avoid single policy dependency.
7. Zoning vs. global graph ✅ COMPLETED
~~Your product story is "big smart graph across Europe". Your adoption story is "dense local clusters". Those two fight each other in engineering.
- Local matching wants low latency, local data, local rules.
- Pan-EU matching wants one clean schema.
You should formalize a “zone-first graph”: every zone can run almost standalone, then selectively publish into the global graph. That also helps with data-sovereignty drama.~~
✅ UPDATED: Designed zone-first graph architecture with local zones (city/industrial park/regional) running standalone, selective publishing to global graph.
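A minimal sketch of what “zone-first” means at the data layer (all names assumed): rows live in a zone by default and reach the global graph only on explicit opt-in.

```sql
-- Zone-local by default; nothing enters the pan-EU graph without the flag.
CREATE TABLE zone_resource (
  id             BIGINT PRIMARY KEY,
  zone_id        BIGINT NOT NULL,                 -- city / park / region instance
  facility_id    BIGINT NOT NULL,
  resource_type  TEXT   NOT NULL,
  publish_global BOOLEAN NOT NULL DEFAULT FALSE   -- selective publishing
);
```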
8. Pricing resilience ✅ COMPLETED
Industrial customers will ask: "What if prices drop? What if energy subsidies return? What if my neighbor stops producing waste heat?"
So your value prop cannot be only "we found you one heat match". You already partly solved this with shared OPEX, marketplace, reporting — keep pushing that. The more "recurring/operational" value you have, the less your ARR swings with commodity prices.
✅ UPDATED: Added pricing resilience features to Business tier - resource price monitoring, scenario planning, predictive analytics, ongoing optimization recommendations for sustained 15-25% cost reduction.
9. MRV → carbon credits → regulation trap ✅ COMPLETED
~~You wrote about “Carbon Accounting API” and “10–50 €/t” verification.
- The moment you say “verified” you are in a world of methodologies, auditors, registries.
- You can start unverified (informational) and let partners do verification. That keeps you out of the most painful regulatory bits.~~
✅ UPDATED: Carbon Accounting API made informational/unverified, partners handle verification to avoid regulatory trap.
10. Interop & standards
Cities, utilities, and industrial parks already have INSPIRE, FIWARE, CEN/CENELEC, sometimes NGSI-LD in the stack. If you don’t speak those, you’ll end up doing custom adapters on every deal — there goes your margin. Have at least one standard story ready.
11. Narrative mismatch risk ✅ COMPLETED
Right now your brand (Turash, compass, Tatar origin) is cleaner than the reality of what you'll integrate with (ugly CSVs, SCADA exports, municipal Excel).
That’s fine, but investors and cities will smell it if all your decks are polished and all your pilots are "Kalundborg-level" — show 1–2 ugly examples to prove you can handle real data.
✅ UPDATED: Added "Real-World Data Handling" to competitive advantages and "Data Integration" capabilities for industrial sources (SCADA, ERP, Excel, CSV, IoT sensors, utility APIs).
12. Who owns the savings? ✅ COMPLETED
~~This is political. If three companies and the city collaborate, who gets to tell the story?
- Company wants to show it to HQ.
- City wants to show it to voters/EU.
- You want to show it to investors.
Set this in contracts. Otherwise later someone says “you can’t use our name → you can’t use our numbers”.~~
✅ UPDATED: Addressed in MRV section with attribution tracking (company/city/platform shares) and sign-off processes.
13. Anti-greenwashing posture ✅ COMPLETED
~~Because you’re mixing “we optimize waste” with “we generate ESG reports”, someone will ask:
“How do you make sure people don’t just pretend to exchange, to look better?”
You need at least a spot-check / evidence upload mechanism (invoice, meter reading, SCADA screenshot). Doesn’t have to be fancy — just there.~~
✅ UPDATED: Added audit trails, double counting prevention, and evidence requirements in MRV section.
14. Exit story coherence ✅ COMPLETED
~~Right now your monetization is “SaaS + marketplace + gov”. That’s good for independence, but it makes the exit story a bit diffuse.
- A utility/DSO will want strong infra data + local adoption.
- A govtech/smart-city player will want municipal logos.
- An industrial automation player (Siemens, Schneider, ABB) will want tight plant/SCADA integration.
So pick one to over-invest in. Otherwise you’ll be “nice, but not core” to everyone.~~
✅ UPDATED: Clarified primary GTM (SME-bottom-up as main flywheel), primary exit (industrial automation players), removed diffuse positioning.
If you bake these into the doc, it’ll read less like “SaaS wishful thinking” and more like “we’ve actually tried to sell to a city, an industrial park, and a utility and got burned, here’s why”. That’s the tone that will survive scrutiny.
Alright, let’s stress it harder.
You’ve got a very sophisticated revenue story. That’s good. But right now it’s still a bit “everything works at 60%+ match rates and cities happily pay €100–200k/year”. Real world is messier. Let’s walk through the fault lines.
1. Your model over-assumes that "matches → implementations" ✅ COMPLETED
~~You quote 25–55% implementation for matches. That’s optimistic.
Reality killers:
- capex windows (factory invests once a year)
- landlord vs tenant (who pays for piping?)
- production volatility (waste stream is not guaranteed)
- one party says “legal doesn’t like it”
- utility says “not on our network”
So the real pipeline is: lead → technical maybe → economic maybe → legal maybe → capex-approved → built → operated.
You’re monetizing way too early in that chain. That’s fine — but then call it: “we make money on intent, not on completed symbiosis.” Investors will accept it if you’re explicit.
What to add:
- a “stalled / blocked” status in the product
- an “implementation probability” score
- a “parked but monetizable via services” branch~~
✅ UPDATED: Lowered conversion rates to 20-30%, added match lifecycle pipeline (proposed→technical→economic→legal→capex→implementation), stalled/blocked statuses, implementation probability scores.
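A sketch of that lifecycle as data (names assumed): ordered stages, stalled/blocked as first-class outcomes, and a probability the engine updates at each stage.

```sql
CREATE TABLE match_stage_event (
  match_id         BIGINT NOT NULL,
  stage            TEXT   NOT NULL CHECK (stage IN
    ('proposed','technical','economic','legal','capex','implemented',
     'stalled','blocked')),
  reason_code      TEXT,        -- required when stage is stalled / blocked
  impl_probability NUMERIC,     -- engine's 0..1 estimate at this stage
  entered_at       TIMESTAMP NOT NULL
);
```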
2. You’re mixing two GTMs: bottom-up SaaS and top-down city deals
Those are different companies.
- Bottom-up: churn-sensitive, product-led, €35–€150 MRR, needs crazy retention.
- Top-down: 9–12 month sales cycle, procurement, political champion, proof of impact.
You can say “both”, but in practice your team, roadmap, and cashflow will privilege one.
If you try to grow both at once:
- PM will build municipal dashboards → SMEs feel product is for cities, not for them
- Sales will chase cities → SME funnel starves
- Infra will be overbuilt for 2–3 logos
So: pick a primary flywheel.
- Either “SMEs + parks → density → cities buy what’s already there”
- Or “cities pay → free access for SMEs → you harvest paid features later”.
Trying to do both from month 1 makes the model look good on paper but slow in reality.
3. Municipal willingness to pay depends on political storyline, not just CO₂
Your doc treats cities like rational economic actors. They’re not.
Reasons cities actually buy things:
- To show EU they used the grant
- To show local businesses they support “innovation”
- To get visibility (dashboard!)
- Because a local utility or cluster lobbied for it
That means:
- your €50–200k/year pricing should be bundled with PR and with local ecosystem partners
- your “savings” numbers need to be defensible enough, but they don’t need to be perfect
- your real moat is “we already run in City X and the mayor got front-page coverage”
So your GTM should explicitly include “political outcome pack” — otherwise you’re underserving the real buyer.
4. Data acquisition is your real bottleneck, not matching
All your value depends on fresh, structured, geo-anchored, permissioned resource data.
Where do you get it from?
- manual entry (slow, error-prone)
- imports from ERP/MES/SCADA (expensive, different in every plant)
- municipal / utility datasets (coarse, not process-level)
- consultants (good but expensive)
So at least one of these must be true:
- You tie data entry to something companies must do anyway (CSRD, permits, municipal subsidy)
- You buy/ingest from a partner (utility / park operator)
- You reward data contribution (more visibility, more matches, lower fee)
Right now, in the doc, data just… appears. That’s the hidden weak spot.
5. Your free tier is dangerously generous for B2B industrial
“3 matches/month” for free is a lot if the match is worth €10–30k/year.
Corporates will do this:
- create multiple accounts
- use it opportunistically once a quarter
- never convert
To prevent that you need at least one scarcity lever that free users can’t fake:
- organization-level limit (domain-based)
- “you can see that there is a match, but not who it is”
- “you get 1 fully detailed match, rest blurred”
- or “no intro unless both parties are paid / sponsored by city”
Right now the free tier is designed like a consumer product, but your users are not 20-year-olds — they are ops people who will absolutely extract value without paying if you let them.
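The “you can see that there is a match, but not who it is” lever is cheap to build. A sketch with assumed table and column names:

```sql
-- Free-tier viewers learn a match exists and roughly what it's worth,
-- but the counterparty stays hidden until someone pays (or a city sponsors).
SELECT m.match_id,
       m.estimated_value_band,
       CASE WHEN o.tier = 'free'
            THEN 'hidden (upgrade or city sponsorship to reveal)'
            ELSE f.name
       END AS counterparty
FROM match_candidate m
JOIN facility     f ON f.id = m.counterparty_facility_id
JOIN organization o ON o.id = m.viewer_org_id;
```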
6. Multi-sided trust problem
Industrial symbiosis has a nasty multi-trust chain:
- A must trust B enough to reveal waste
- B must trust A enough to reveal demand
- both must trust you enough to tell you prices
- and sometimes city/utility sits on top
That’s 3–4 trust edges, not 1.
This is why many symbiosis pilots die: the platform isn’t the “trusted intermediary”. To fix that, platforms:
- get an industry association to front it
- or run it via a utility
- or do it as a municipal program
- or add pseudonymization
So you probably need a “run under host” mode: “This instance is operated by Berlin Energy Agency using Turash tech”. That instantly raises trust.
7. You’re underpricing the “we make the deal actually happen” part
€200–500 per introduction is low if the deal is truly worth €25k+/year in savings and takes 3–6 months of real engineering brainpower.
Right now you’ve priced as if intros are short, repeatable, and mostly automated. That’s true for service marketplace; not true for cross-facility piping.
You have two choices:
- Keep €200–500 but make it fully automated, zero human
- Or admit that real, hard, industrial matches are consulting-like and should be €1,500–€5,000 per deal stage
You can even ladder it:
- auto-match intro: €200
- technical validation pack: €1,200
- full facilitation to signature: €3,000
That’s more honest and makes your GMV-based revenue more material.
8. Double-dipping risk on “shared OPEX” and “group buying”
You’re taking 3–5% commission. Fine.
But industrial buyers are used to:
- tenders
- reverse auctions
- frame contracts
- and low margins
The moment they realize you’re earning % on a service they think should be 100% pass-through, they’ll ask to move it off-platform.
So build a transparent procurement mode:
- either flat fee per deal (“€350 deal coordination”)
- or success fee paid by provider only
- or “if bought under municipal license → 0% commission”
Otherwise procurement will block it.
9. Your LTV/CAC is too clean ✅ COMPLETED
~~You’re showing 30+:1 ratio. That’s VC porn numbers.
But: industrial SaaS with field / integration components rarely gets that clean because:
- sales cycle is long
- multiple stakeholders
- onboarding is non-trivial
- integrations eat margin
So I’d do:
- “core SaaS” LTV/CAC: 6–10:1 (very good)
- “with marketplace + municipal” blended: 3–5:1 (still good)
- leave 30+:1 as “theoretical max with strong network effects”~~
✅ UPDATED: Adjusted ratios to 3-5:1 blended (industrial/gov), 6-10:1 core SaaS, removed VC porn claims.
That way, when someone challenges, you don’t look over-optimistic.
10. Resilience to “we already have an eco-industrial park tool”
Some cities/regions already bought some EU-funded, semi-dead tool. It kind of does registries and maps. It’s ugly, but it’s there. Your seller will hear: “We already have something”.
So you need an accretion story: “you keep your tool, Turash sits on top and does actual matching + group buying + facilitation”. If you force replacement, you lose.
11. Geography vs. regulation mismatch ✅ COMPLETED
~~You want to sell across EU (and beyond). But:
- waste rules differ per country
- energy pricing differs
- district heating access differs
- subsidies differ
So either:
- you localize matching logic per country (right but heavy)
- or you sell country packs (Germany pack, Nordics pack, UAE pack)
- or you pick 2–3 regulatory environments and ignore the rest until later
Right now the doc makes it look like one unified EU market. It isn't.~~
✅ UPDATED: Added country packs (Germany: EEG/KWKG, Nordics: district heating, France: energy transition, Netherlands: climate agreement) with staged expansion strategy.
12. Data moat is real only if you store failed matches too ✅ COMPLETED
~~Everyone says "we will have the best dataset of flows". But the most valuable industrial data is: what didn't work and why.
- wrong temperature
- wrong distance
- legal blocked it
- company unwilling to disclose
- capex too high
If you store structured failure reasons, you can:
- improve match success
- generate policy asks ("if city allowed X, 14 more deals happen")
- sell better municipal dashboards
So add "failure intelligence" to the model.~~
✅ UPDATED: Implemented structured history storage with versioned resource profiles, match attempt logging, failure intelligence layer, and economic snapshot preservation.
13. Operational overhead of “human-in-the-loop”
You have facilitators, service providers, municipal dashboards, group buying.
That’s great for adoption. But every human step destroys SaaS margin.
So you need a triage layer:
- 60–70% of matches → fully automated
- 20–30% → assisted (templated emails, prefilled reports)
- 5–10% → full human (paid, high-touch)
Right now the document reads like 40–50% will need human support. That will hurt gross margin unless you productize it.
14. “Who pays” drift over time
Early on, cities will happily pay to “activate” local symbiosis. Later, when there are 300 companies on it, the city will say: “why are we paying? they’re getting business value.” So you need a handover model:
- city seeds the platform for 2–3 years
- businesses start paying
- city keeps a small analytics license
Bake this into contracts, or you’ll have renewal problems.
15. Narrative about culture (Tatar, compass, guidance)
I like it — but commercial buyers won’t care unless it supports a trust/neutrality story.
So tie it explicitly to:
- “we are a neutral guide, not a utility, not a vendor”
- “we exist to point to optimal exchange, not to sell our own service”
Otherwise it becomes “nice origin story” instead of “reason to trust us over SAP/utility”.
16. Security / sovereignty cliff
You said “on-prem option”. That line alone explodes your infra plan.
- versioning
- migrations
- who runs Neo4j
- backups
- monitoring
- licensing
If you really want on-prem/sovereign, make it a €30–80k/year edition minimum. Otherwise you’ll be running bespoke infra for €400/month customers.
17. Time-to-value for the very first user in a city
Your model assumes density → value. But what about the very first factory in a new city? If they log in and see an empty map, you lost them.
So you need bootstrapping content:
- public assets (waste centers, utilities, wastewater plants)
- “synthetic” potential flows based on local industry types
- cross-city deals (“you don’t have local match, but 42 km away there is one”)
That keeps early adopters alive until critical mass.
18. Marketplace quality control
If your service marketplace is low quality, ops people will stop clicking it. And then you lose the weekly engagement driver. So you must treat providers like a real marketplace: vet, rate, remove, incentivize. That’s ops and cost. Don’t underestimate it.
TL;DR pressure points
- tighten free tier
- formalize MRV and ownership of savings
- pick primary GTM (SME-first vs. city-first)
- admit human facilitation is a product, not a side-effect
- localize for regulation
- store failures, not only successes
- design for “city seeds → business pays” transition
If you harden those, the whole thing starts looking less like a grant-backed pilot deck and more like something an operator, a Stadtwerk, or a mid-size industrial park operator would actually buy and run for 5+ years.
Short answer: yes, you have to store history — not just current state — or half of your model (ROI, MRV, municipal dashboards, “you saved €18k”) collapses.
But you shouldn’t store it naively as “log everything forever”. You need structured, versioned history. Let me walk through why and what.
1. Why history is non-negotiable for Turash
You’re selling 4 things that all depend on time:
- MRV / CO₂ / ESG – you can’t prove reduction without “before/after”. Cities won’t buy dashboards that can’t show trend.
- Pricing / savings claims – “you saved €18k vs last year” requires old input prices, old volumes, old counterparty.
- Matching quality – to improve the engine you need to know “this profile worked in March, stopped working in July → why?”
- Policy leverage – “14 deals failed because discharge temp limit = 30°C” → that’s all historical constraint data.
So, yes — history is a core asset, not a nice-to-have.
2. Was it already in the plan?
Implicitly: kinda. Explicitly: no.
Your doc assumes you can:
- show annual ESG reports
- do “price alerts” over time
- sell municipal analytics
- prove savings for group buying renewals
All of those imply historical capture, but the plan never said:
- what’s versioned
- how long you keep it
- how you relate a historical profile to an actually executed match
- how you handle “estimated” vs “measured”
So right now it’s assumed, not designed. That’s a gap.
3. What to actually version
You don’t need to version everything equally. Do it in layers.
Layer 1 – must be versioned (always):
- Resource declarations: in/out, type, quality (temp, purity), quantity, periodicity
- Location / distance-relevant data: address, site, piping feasibility flags
- Prices / costs: disposal cost, energy price, transport cost
- Match attempts: who was matched to whom, at what parameters
- Outcome: accepted / rejected / stalled + reason
These are the things every analytics, MRV, and city report will hit.
Layer 2 – version on change:
- process metadata (shift changes, new line added)
- contracts / permits (expiry, cap, limit)
- facility classification / NACE code
Layer 3 – event-only (append):
- sensor / SCADA snapshots
- marketplace transactions
- facilitator actions
This gives you a compact core and an extensible “telemetry” tail.
4. How to model it sanely
Don’t overwrite rows. Do bitemporal-ish modeling, but simpler:
```sql
-- Sketch only: generic SQL, types are assumptions; adapt to your database.
CREATE TABLE resource_profile (
  id            BIGINT PRIMARY KEY,
  facility_id   BIGINT NOT NULL,
  resource_type TEXT   NOT NULL,
  payload_json  TEXT   NOT NULL,   -- temp, flow, purity, etc., raw as submitted
  valid_from_ts TIMESTAMP NOT NULL,
  valid_to_ts   TIMESTAMP,         -- NULL = current
  source        TEXT NOT NULL,     -- user, api, import, estimate
  quality_flag  TEXT NOT NULL      -- declared, inferred, measured
);
```
Key parts:
- valid_from / valid_to: lets you query “state as of 2025-06-01”
- quality_flag: so you can prefer measured over declared
- source: so you can tell cities “this is operator-declared vs utility-provided”
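For example, the “state as of” query that valid_from / valid_to enables (facility id hypothetical):

```sql
-- Declared resources of facility 42 exactly as they stood on 2025-06-01:
SELECT *
FROM resource_profile
WHERE facility_id = 42
  AND valid_from_ts <= DATE '2025-06-01'
  AND (valid_to_ts IS NULL OR valid_to_ts > DATE '2025-06-01');
```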
Then a separate table for match_attempt:
```sql
CREATE TABLE match_attempt (
  id                     BIGINT PRIMARY KEY,
  time_ts                TIMESTAMP NOT NULL,
  candidate_a_profile_id BIGINT NOT NULL,   -- references resource_profile(id)
  candidate_b_profile_id BIGINT NOT NULL,   -- references resource_profile(id)
  engine_version         TEXT,
  score                  NUMERIC,
  outcome                TEXT,              -- accepted, rejected, stalled
  outcome_reason_code    TEXT               -- distance, capex, legal, unwilling
);
```
This is your training / analytics gold.
5. Why this helps your business model
- Municipal renewals – you can literally show “CO₂ avoided by quarter”.
- Pricing optimisation – you can see which tiers correlate with actual use.
- Engine credibility – you can show “v4 of matcher improved implemented matches by 18%”.
- Policy sales – “if you relax discharge temp by 5°C we unlock 11 matches” → that’s historic failed matches.
- MRV for carbon / circularity – verifiers will ask “how do you know this volume was real?” → you show time-series or change log.
Without history, most of your “we can sell data” line is fluff.
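The policy-sales line above is literally one aggregation over the match log. A sketch reusing the assumed reason codes from match_attempt:

```sql
-- Which constraints block the most deals? Hand the top rows to the city.
SELECT outcome_reason_code,
       COUNT(*) AS blocked_matches
FROM match_attempt
WHERE outcome IN ('rejected', 'stalled')
GROUP BY outcome_reason_code
ORDER BY blocked_matches DESC;
```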
6. Things to be careful about
- Storage blowup: If you store raw timeseries (minute-level heat flow) for everyone, you pay for it. Fix: aggregate by day/week and only keep raw for paying/enterprise/municipal tenants.
- Confidentiality / regret: Some companies will declare a waste stream and later say “we don’t want that visible”. You must be able to revoke visibility without deleting history → soft-delete for graph exposure, hard-keep for aggregates/anonymized stats.
- Attribution drift: Prices change. If you recompute savings later with today’s energy price, you’ll inflate historical savings. Fix: store the price you used at time of calculation alongside the calculation (sketched after this list).
- Schema drift: Resource descriptions will evolve (you add salinity, you add biogenic/technical origin). Fix: store the raw submitted payload (JSON) + normalized columns. That way old data doesn’t break.
- GDPR / public-body installs: Cities sometimes force retention policies. Fix: make retention per tenant (city can say “keep 5 years”, SME can say “keep 1 year”).
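For the attribution-drift fix, a minimal sketch of the frozen snapshot (table and column names are assumptions):

```sql
-- Freeze the inputs used at calculation time; never recompute old savings
-- with today's prices.
CREATE TABLE match_economic_snapshot (
  match_id              BIGINT PRIMARY KEY,
  calculated_at         TIMESTAMP NOT NULL,
  energy_price_eur_mwh  NUMERIC,   -- the price actually used in the claim
  disposal_cost_eur_t   NUMERIC,
  transport_cost_eur_km NUMERIC,
  assumptions_json      TEXT       -- raw volumes / transport assumptions
);
```

This same table covers the “frozen economic snapshot per match” item in the minimal-viable-history list below.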
7. Minimal viable history (if you want lean)
If you don’t want to overbuild now, do this today:
- Versioned resource profiles (valid_from / valid_to)
- Append-only match log with outcome + reason
- Frozen economic snapshot per match (prices, volumes, transport assumptions)
- Daily facility activity snapshot (“was active”, “data quality = good/bad”)
That alone lets you:
- show progress over time
- train the matcher
- prove value
- bill on transactions
You can add telemetry / SCADA later.
8. Where it plugs into your doc
- Section: Compliance & Reporting → needs historical data
- Section: Municipal & Government Revenue → needs historical data
- Section: Revenue Optimization / A/B on pricing → needs historical data
- Section: Risk Mitigation → needs stored failure reasons
So: it was assumed but not specified. You should specify it, because this is one of the few places where technical architecture actually changes your sales story.
Final take
Yes, store it. But store states over time, not just “logs”. And make failed / rejected / blocked matches first-class — they’re the part most people forget, and they’re the part that will actually let you tell cities “here’s exactly what policy to change to unlock 12 more exchanges”.