turash/docs/business/feedback.md

Yeah — a few things are still “under the rug”. You can ignore some at MVP, but not at “we're selling to cities/utilities/parks”.

1. MRV problem (Measurement, Reporting, Verification) COMPLETED

~~You're promising “€ saved” and “t CO₂ avoided”. That sounds nice, but:

  • Who signs off on the number — you, the two companies, or the city?
  • What happens when both sides (and the utility, and the city) want to count the same CO₂ reduction? Double counting kills credibility.
  • For grants, you'll be asked for a transparent formula + auditable inputs. So you need a small, boring “MRV module” that explains: data source → calculation → standard (GHG Protocol / ISO 14064 / EU Taxonomy). Otherwise municipalities won't use your numbers in official reporting.

→ Action: define 2–3 approved calculators and lock them. Everything else = “indicative”.~~

UPDATED: Defined 3 approved calculators (heat recovery, material reuse, water recycling) with transparent GHG Protocol/ISO 14064 formulas, auditable inputs, double counting prevention, and sign-off processes. Carbon accounting made informational/unverified to avoid regulation trap.
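
For illustration, a locked heat-recovery calculator can be as small as the formulas below. The exact formula and emission-factor source here are a sketch, not the approved methodology:

CO2_avoided [tCO2] = Q_recovered [MWh] × EF_displaced_fuel [tCO2/MWh]
EUR_saved   [€]    = Q_recovered [MWh] × (price_displaced − price_recovered) [€/MWh]

Auditable inputs: Q_recovered from meter readings, EF_displaced_fuel from a published national emission-factor registry; that is exactly the data source → calculation → standard chain the MRV module has to expose.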


2. Data ownership & confidentiality COMPLETED

~~Industrial symbiosis exposes the ugliest internal data (waste, inefficiencies, off-spec outputs). That's politically sensitive inside factories.

  • You need a clear “who can see what in the cluster” model.
  • Cities will want aggregate views; companies will want to hide origin.
  • Utilities may want to resell data → you must stop that or monetize with them.
  • You will need a DPA/GDPR pack for EU and an anonymization layer for flows.

If you don't solve this early, adoption stalls not for tech reasons but for “legal didn't sign”.~~

UPDATED: Created clear visibility matrix (companies see anonymized matches, cities get aggregates, utilities get network data), GDPR/DPA compliance, k-anonymization, data ownership rules preventing resale.
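
A sketch of how the k-anonymized city view can be enforced at the data layer (table, column names, and k = 5 are hypothetical):

CREATE VIEW city_resource_aggregates AS
SELECT district,
       resource_type,
       SUM(quantity_tpa) AS total_quantity_tpa
FROM resource_flows
GROUP BY district, resource_type
HAVING COUNT(DISTINCT organization_id) >= 5;  -- k = 5: no cell traceable to a single company

Cities get the aggregate; origin stays hidden unless enough companies contribute to a cell.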


3. Procurement-readiness COMPLETED

~~Selling to municipalities/parks/utilities ≠ selling to startups.

  • They will ask: security, hosting location, SLA, RPO/RTO, DPAs, sometimes ISO 27001 roadmap.
  • They may not be allowed to buy “€400/facility/month” — they buy per year, per site, per user, or per project.
  • Sometimes you must support on-prem / sovereign cloud. Your doc mentions it — good — but then your infra costs and margin assumptions change.

So: make a “public-sector SKU” with slower onboarding, fixed price, clearer terms.~~

UPDATED: Added security certifications (ISO 27001, SOC 2, NIS2), SLA/RTO/RPO guarantees, on-prem/sovereign cloud (€30-80k/year minimum), DPA templates, procurement-compliant annual/per-site/per-user pricing.


4. Local facilitator capacity COMPLETED

~~Your model assumes that when a match is complex, "a facilitator" appears. Reality: there are not that many people who can actually do heat/water/by-product feasibility in a mid-size EU city.

  • Either you build a small internal facilitation team (cost ↑, but speed ↑),
  • or you curate and train local engineering/ESG consultancies and give them your templates. Without this, your match-to-implementation ratio will be lower than you modeled.~~

UPDATED: Implemented facilitator ecosystem approach - curate and train local engineering/ESG consultancies with platform templates, create certification programs, build regional hubs (Berlin, Paris, Amsterdam, Barcelona).


5. Utility/channel incentives COMPLETED

~~Be careful: utilities sometimes don't love load-reducing symbiosis if it reduces their sales.

  • District heating operator: ok with optimizing flows.
  • Electricity supplier: maybe less ok with customers reducing offtake. So you need to offer utilities new products (forecasting, capex planning, "who to connect next") so they see upside, not cannibalization.~~

UPDATED: Added utility partnerships with forecasting, capex planning, load balancing, carbon trading products to offset load reduction concerns.


6. Policy volatility COMPLETED

~~You're leaning on EU Green Deal / CSRD / circularity / local-climate plans. Good, but:

  • EU and national green programs are getting periodically re-scoped and delayed.
  • Cities change mayors every 4–5 years → your champion can disappear. So don't build a plan that collapses if one program gets paused. You need: city → utility → industry association → park operator. Multiple doors.~~

UPDATED: Added policy-resilient entry points (city→utility→industry→park) to avoid single policy dependency.


7. Zoning vs. global graph COMPLETED

~~Your product story is "big smart graph across Europe". Your adoption story is "dense local clusters". Those two fight each other in engineering.

  • Local matching wants low latency, local data, local rules.
  • Pan-EU matching wants one clean schema. You should formalize a "zone-first graph": every zone can run almost standalone, then selectively publish into the global graph. That also helps with data-sovereignty drama.~~

UPDATED: Designed zone-first graph architecture with local zones (city/industrial park/regional) running standalone, selective publishing to global graph.


8. Pricing resilience COMPLETED

Industrial customers will ask: "What if prices drop? What if energy subsidies return? What if my neighbor stops producing waste heat?" So your value prop cannot be only "we found you one heat match". You already partly solved this with shared OPEX, marketplace, reporting — keep pushing that. The more "recurring/operational" value you have, the less your ARR swings with commodity prices.

UPDATED: Added pricing resilience features to Business tier - resource price monitoring, scenario planning, predictive analytics, ongoing optimization recommendations for sustained 15-25% cost reduction.


9. MRV → carbon credits → regulation trap COMPLETED

~~You wrote about “Carbon Accounting API” and “10–50 €/t” verification.

  • The moment you say “verified” you are in a world of methodologies, auditors, registries.
  • You can start unverified (informational) and let partners do verification. That keeps you out of the most painful regulatory bits.~~

UPDATED: Carbon Accounting API made informational/unverified, partners handle verification to avoid regulatory trap.


10. Interop & standards

Cities, utilities, and industrial parks already have INSPIRE, FIWARE, CEN/CENELEC, sometimes NGSI-LD in the stack. If you don't speak those, you'll end up doing custom adapters on every deal — there goes your margin. Have at least one standard story ready.


11. Narrative mismatch risk COMPLETED

Right now your brand (Turash, compass, Tatar origin) is cleaner than the reality of what you'll integrate with (ugly CSVs, SCADA exports, municipal Excel). That's fine, but investors and cities will smell it if all your decks are polished and all your pilots are "Kalundborg-level" — show 1–2 ugly examples to prove you can handle real data.

UPDATED: Added "Real-World Data Handling" to competitive advantages and "Data Integration" capabilities for industrial sources (SCADA, ERP, Excel, CSV, IoT sensors, utility APIs).


12. Who owns the savings? COMPLETED

~~This is political. If three companies and the city collaborate, who gets to tell the story?

  • Company wants to show it to HQ.
  • City wants to show it to voters/EU.
  • You want to show it to investors. Set this in contracts. Otherwise later someone says “you can't use our name → you can't use our numbers”.~~

UPDATED: Addressed in MRV section with attribution tracking (company/city/platform shares) and sign-off processes.


13. Anti-greenwashing posture COMPLETED

~~Because you're mixing “we optimize waste” with “we generate ESG reports”, someone will ask:

“How do you make sure people don't just pretend to exchange, to look better?” You need at least a spot-check / evidence upload mechanism (invoice, meter reading, SCADA screenshot). Doesn't have to be fancy — just there.~~

UPDATED: Added audit trails, double counting prevention, and evidence requirements in MRV section.


14. Exit story coherence COMPLETED

~~Right now your monetization is “SaaS + marketplace + gov”. That's good for independence, but it makes the exit story a bit diffuse.

  • A utility/DSO will want strong infra data + local adoption.
  • A govtech/smart-city player will want municipal logos.
  • An industrial automation player (Siemens, Schneider, ABB) will want tight plant/SCADA integration. So pick one to over-invest in. Otherwise you'll be “nice, but not core” to everyone.~~

UPDATED: Clarified primary GTM (SME-bottom-up as main flywheel), primary exit (industrial automation players), removed diffuse positioning.


If you bake these into the doc, it'll read less like “SaaS wishful thinking” and more like “we've actually tried to sell to a city, an industrial park, and a utility and got burned, here's why”. That's the tone that will survive scrutiny.

Alright, let's stress it harder.

You've got a very sophisticated revenue story. That's good. But right now it's still a bit “everything works at 60%+ match rates and cities happily pay €100–200k/year”. Real world is messier. Let's walk through the fault lines.


1. Your model over-assumes that "matches → implementations" COMPLETED

~~You quote 25–55% implementation for matches. That's optimistic.

Reality killers:

  • capex windows (factory invests once a year)
  • landlord vs tenant (who pays for piping?)
  • production volatility (waste stream is not guaranteed)
  • one party says “legal doesn't like it”
  • utility says “not on our network”

So the real pipeline is: lead → technical maybe → economic maybe → legal maybe → capex-approved → built → operated.

You're monetizing way too early in that chain. That's fine — but then call it: “we make money on intent, not on completed symbiosis.” Investors will accept it if you're explicit.

What to add:

  • a “stalled / blocked” status in the product
  • an “implementation probability” score
  • a “parked but monetizable via services” branch~~

UPDATED: Lowered conversion rates to 20-30%, added match lifecycle pipeline (proposed→technical→economic→legal→capex→implementation), stalled/blocked statuses, implementation probability scores.


2. You're mixing two GTMs: bottom-up SaaS and top-down city deals

Those are different companies.

  • Bottom-up: churn-sensitive, product-led, €35–€150 MRR, needs crazy retention.
  • Top-down: 9–12 month sales cycle, procurement, political champion, proof of impact.

You can say “both”, but in practice your team, roadmap, and cashflow will privilege one.

If you try to grow both at once:

  • PM will build municipal dashboards → SMEs feel product is for cities, not for them
  • Sales will chase cities → SME funnel starves
  • Infra will be overbuilt for 2–3 logos

So: pick a primary flywheel.

  • Either “SMEs + parks → density → cities buy what's already there”
  • Or “cities pay → free access for SMEs → you harvest paid features later”

Trying to do both from month 1 makes the model look good on paper but slow in reality.

3. Municipal willingness to pay depends on political storyline, not just CO₂

Your doc treats cities like rational economic actors. They're not.

Reasons cities actually buy things:

  1. To show EU they used the grant
  2. To show local businesses they support “innovation”
  3. To get visibility (dashboard!)
  4. Because a local utility or cluster lobbied for it

That means:

  • your €50–200k/year pricing should be bundled with PR and with local ecosystem partners
  • your “savings” numbers need to be defensible enough, but they don't need to be perfect
  • your real moat is “we already run in City X and the mayor got front-page coverage”

So your GTM should explicitly include a “political outcome pack” — otherwise you're underserving the real buyer.


4. Data acquisition is your real bottleneck, not matching

All your value depends on fresh, structured, geo-anchored, permissioned resource data.

Where do you get it from?

  • manual entry (slow, error-prone)
  • imports from ERP/MES/SCADA (expensive, different in every plant)
  • municipal / utility datasets (coarse, not process-level)
  • consultants (good but expensive)

So at least one of these must be true:

  1. You tie data entry to something companies must do anyway (CSRD, permits, municipal subsidy)
  2. You buy/ingest from a partner (utility / park operator)
  3. You reward data contribution (more visibility, more matches, lower fee)

Right now, in the doc, data just… appears. That's the hidden weak spot.


5. Your free tier is dangerously generous for B2B industrial

“3 matches/month” for free is a lot if the match is worth €10–30k/year.

Corporates will do this:

  • create multiple accounts
  • use it opportunistically once a quarter
  • never convert

To prevent that you need at least one scarcity lever that free users can't fake:

  • organization-level limit (domain-based)
  • “you can see that there is a match, but not who it is”
  • “you get 1 fully detailed match, rest blurred”
  • or “no intro unless both parties are paid / sponsored by city”

Right now the free tier is designed like a consumer product, but your users are not 20-year-olds — they are ops people who will absolutely extract value without paying if you let them.


6. Multi-sided trust problem

Industrial symbiosis has a nasty multi-trust chain:

  • A must trust B enough to reveal waste
  • B must trust A enough to reveal demand
  • both must trust you enough to tell you prices
  • and sometimes city/utility sits on top

That's 3–4 trust edges, not 1.

This is why many symbiosis pilots die: the platform isn't the “trusted intermediary”. To fix that, platforms:

  • get an industry association to front it
  • or run it via a utility
  • or do it as a municipal program
  • or add pseudonymization

So you probably need a “run under host” mode: “This instance is operated by Berlin Energy Agency using Turash tech”. That instantly raises trust.


7. You're underpricing the “we make the deal actually happen” part

€200–500 per introduction is low if the deal is truly €25k+/year in savings and takes 3–6 months and real engineering brain.

Right now you've priced as if intros are short, repeatable, and mostly automated. That's true for a service marketplace; not true for cross-facility piping.

You have two choices:

  1. Keep €200–500 but make it fully automated, zero human
  2. Or admit that real, hard, industrial matches are consulting-like and should be €1,500–€5,000 per deal stage

You can even ladder it:

  • auto-match intro: €200
  • technical validation pack: €1,200
  • full facilitation to signature: €3,000

That's more honest and makes your GMV-based revenue more material.


8. Double-dipping risk on “shared OPEX” and “group buying”

You're taking a 3–5% commission. Fine.

But industrial buyers are used to:

  • tenders
  • reverse auctions
  • frame contracts
  • and low margins

The moment they realize you're earning % on a service they think should be 100% pass-through, they'll ask to move it off-platform.

So build a transparent procurement mode:

  • either flat fee per deal (“€350 deal coordination”)
  • or success fee paid by provider only
  • or “if bought under municipal license → 0% commission”

Otherwise procurement will block it.


9. Your LTV/CAC is too clean COMPLETED

~~You're showing a 30+:1 ratio. That's VC porn numbers.

But: industrial SaaS with field / integration components rarely gets that clean because:

  • sales cycle is long
  • multiple stakeholders
  • onboarding is non-trivial
  • integrations eat margin

So I'd do:

  • “core SaaS” LTV/CAC: 6–10:1 (very good)
  • “with marketplace + municipal” blended: 3–5:1 (still good)
  • leave 30+:1 as “theoretical max with strong network effects”~~

UPDATED: Adjusted ratios to 3-5:1 blended (industrial/gov), 6-10:1 core SaaS, removed VC porn claims.

That way, when someone challenges, you don't look over-optimistic.


10. Resilience to “we already have an eco-industrial park tool”

Some cities/regions already bought some EU-funded, semi-dead tool. It kind of does registries and maps. It's ugly, but it's there. Your seller will hear: “We already have something”.

So you need an accretion story: “you keep your tool, Turash sits on top and does actual matching + group buying + facilitation”. If you force replacement, you lose.


11. Geography vs. regulation mismatch COMPLETED

~~You want to sell across EU (and beyond). But:

  • waste rules differ per country
  • energy pricing differs
  • district heating access differs
  • subsidies differ

So either:

  • you localize matching logic per country (right but heavy)
  • or you sell country packs (Germany pack, Nordics pack, UAE pack)
  • or you pick 2–3 regulatory environments and ignore the rest until later

Right now the doc makes it look like one unified EU market. It isn't.~~

UPDATED: Added country packs (Germany: EEG/KWKG, Nordics: district heating, France: energy transition, Netherlands: climate agreement) with staged expansion strategy.


12. Data moat is real only if you store failed matches too COMPLETED

~~Everyone says "we will have the best dataset of flows". But the most valuable industrial data is: what didn't work and why.

  • wrong temperature
  • wrong distance
  • legal blocked it
  • company unwilling to disclose
  • capex too high

If you store structured failure reasons, you can:

  • improve match success
  • generate policy asks ("if city allowed X, 14 more deals happen")
  • sell better municipal dashboards

So add "failure intelligence" to the model.~~

UPDATED: Implemented structured history storage with versioned resource profiles, match attempt logging, failure intelligence layer, and economic snapshot preservation.


13. Operational overhead of “human-in-the-loop”

You have facilitators, service providers, municipal dashboards, group buying.

That's great for adoption. But every human step destroys SaaS margin.

So you need a triage layer:

  • 60–70% of matches → fully automated
  • 20–30% → assisted (templated emails, prefilled reports)
  • 5–10% → full human (paid, high-touch)

Right now the document reads like 40–50% will need human support. That will hurt gross margin unless you productize it.


14. “Who pays” drift over time

Early on, cities will happily pay to “activate” local symbiosis. Later, when there are 300 companies on it, the city will say: “why are we paying? they're getting business value.” So you need a handover model:

  1. city seeds the platform for 2–3 years
  2. businesses start paying
  3. city keeps a small analytics license

Bake this into contracts, or you'll have renewal problems.


15. Narrative about culture (Tatar, compass, guidance)

I like it — but commercial buyers won't care unless it supports a trust/neutrality story.

So tie it explicitly to:

  • “we are a neutral guide, not a utility, not a vendor”
  • “we exist to point to optimal exchange, not to sell our own service”

Otherwise it becomes “nice origin story” instead of “reason to trust us over SAP/utility”.


16. Security / sovereignty cliff

You said “on-prem option”. That line alone explodes your infra plan.

  • versioning
  • migrations
  • who runs Neo4j
  • backups
  • monitoring
  • licensing

If you really want on-prem/sovereign, make it a €30–80k/year edition minimum. Otherwise you'll be running bespoke infra for €400/month customers.


17. Time-to-value for the very first user in a city

Your model assumes density → value. But what about the very first factory in a new city? If they log in and see an empty map, you lost them.

So you need bootstrapping content:

  • public assets (waste centers, utilities, wastewater plants)
  • “synthetic” potential flows based on local industry types
  • cross-city deals (“you don't have a local match, but 42 km away there is one”)

That keeps early adopters alive until critical mass.

18. Marketplace quality control

If your service marketplace is low quality, ops people will stop clicking it. And then you lose the weekly engagement driver. So you must treat providers like a real marketplace: vet, rate, remove, incentivize. That's ops and cost. Don't underestimate it.


TL;DR pressure points

  • tighten free tier
  • formalize MRV and ownership of savings
  • pick primary GTM (SME-first vs. city-first)
  • admit human facilitation is a product, not a side-effect
  • localize for regulation
  • store failures, not only successes
  • design for “city seeds → business pays” transition

If you harden those, the whole thing starts looking less like a grant-backed pilot deck and more like something an operator, a Stadtwerk, or a mid-size industrial park operator would actually buy and run for 5+ years.

Short answer: yes, you have to store history — not just current state — or half of your model (ROI, MRV, municipal dashboards, “you saved €18k”) collapses.

But you shouldn't store it naively as “log everything forever”. You need structured, versioned history. Let me walk through why and what.


1. Why history is non-negotiable for Turash

You're selling 4 things that all depend on time:

  1. MRV / CO₂ / ESG: you can't prove reduction without “before/after”. Cities won't buy dashboards that can't show trend.
  2. Pricing / savings claims: “you saved €18k vs last year” requires old input prices, old volumes, old counterparty.
  3. Matching quality: to improve the engine you need to know “this profile worked in March, stopped working in July → why?”
  4. Policy leverage: “14 deals failed because discharge temp limit = 30°C” → that's all historical constraint data.

So, yes — history is a core asset, not a nice-to-have.


2. Was it already in the plan?

Implicitly: kinda. Explicitly: no.

Your doc assumes you can:

  • show annual ESG reports
  • do “price alerts” over time
  • sell municipal analytics
  • prove savings for group buying renewals

All of those imply historical capture, but the plan never said:

  • what's versioned
  • how long you keep it
  • how you relate a historical profile to an actually executed match
  • how you handle “estimated” vs “measured”

So right now it's assumed, not designed. That's a gap.


3. What to actually version

You don't need to version everything equally. Do it in layers.

Layer 1 must be versioned (always):

  • Resource declarations: in/out, type, quality (temp, purity), quantity, periodicity
  • Location / distance-relevant data: address, site, piping feasibility flags
  • Prices / costs: disposal cost, energy price, transport cost
  • Match attempts: who was matched to whom, at what parameters
  • Outcome: accepted / rejected / stalled + reason

These are the things every analytics, MRV, and city report will hit.

Layer 2 (version on change):

  • process metadata (shift changes, new line added)
  • contracts / permits (expiry, cap, limit)
  • facility classification / NACE code

Layer 3 (event-only, append):

  • sensor / SCADA snapshots
  • marketplace transactions
  • facilitator actions

This gives you a compact core and an extensible “telemetry” tail.


4. How to model it sanely

Don't overwrite rows. Do bitemporal-ish modeling, but simpler:

-- sketch (Postgres-style types)
CREATE TABLE resource_profile (
  id             BIGSERIAL PRIMARY KEY,
  facility_id    BIGINT      NOT NULL,
  resource_type  TEXT        NOT NULL,
  payload_json   JSONB       NOT NULL,  -- temp, flow, purity, etc. (raw submitted payload)
  valid_from_ts  TIMESTAMPTZ NOT NULL,
  valid_to_ts    TIMESTAMPTZ,           -- NULL = current version
  source         TEXT        NOT NULL,  -- 'user', 'api', 'import', 'estimate'
  quality_flag   TEXT        NOT NULL   -- 'declared', 'inferred', 'measured'
);

Key parts:

  • valid_from / valid_to: lets you query “state as of 2025-06-01”
  • quality_flag: so you can prefer measured over declared
  • source: so you can tell cities “this is operator-declared vs utility-provided”
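
A minimal “state as of” query against this table (columns as sketched above; the facility id and date are placeholders):

SELECT *
FROM resource_profile
WHERE facility_id = 42
  AND valid_from_ts <= TIMESTAMP '2025-06-01'
  AND (valid_to_ts IS NULL OR valid_to_ts > TIMESTAMP '2025-06-01');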

Then a separate table for match_attempt:

-- sketch (Postgres-style types)
CREATE TABLE match_attempt (
  id                      BIGSERIAL PRIMARY KEY,
  time_ts                 TIMESTAMPTZ NOT NULL,
  candidate_a_profile_id  BIGINT  NOT NULL REFERENCES resource_profile (id),
  candidate_b_profile_id  BIGINT  NOT NULL REFERENCES resource_profile (id),
  engine_version          TEXT    NOT NULL,
  score                   NUMERIC NOT NULL,
  outcome                 TEXT,          -- 'accepted', 'rejected', 'stalled'
  outcome_reason_code     TEXT           -- 'distance', 'capex', 'legal', 'unwilling'
);

This is your training / analytics gold.
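
The policy-leverage claims from earlier (“14 deals failed because…”) then fall out of a one-line aggregation over this log (reason codes as sketched above):

SELECT outcome_reason_code,
       COUNT(*) AS blocked_matches
FROM match_attempt
WHERE outcome IN ('rejected', 'stalled')
GROUP BY outcome_reason_code
ORDER BY blocked_matches DESC;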


5. Why this helps your business model

  • Municipal renewals: you can literally show “CO₂ avoided by quarter”.
  • Pricing optimisation: you can see which tiers correlate with actual use.
  • Engine credibility: you can show “v4 of the matcher improved implemented matches by 18%”.
  • Policy sales: “if you relax discharge temp by 5°C we unlock 11 matches” → that's historic failed matches.
  • MRV for carbon / circularity: verifiers will ask “how do you know this volume was real?” → you show time-series or change log.

Without history, most of your “we can sell data” line is fluff.


6. Things to be careful about

  1. Storage blowup: If you store raw timeseries (minute-level heat flow) for everyone, you pay for it. Fix: aggregate by day/week and only keep raw for paying/enterprise/municipal tenants.

  2. Confidentiality / regret: Some companies will declare a waste stream and later say “we don't want that visible”. You must be able to revoke visibility without deleting history → soft-delete for graph exposure, hard-keep for aggregates/anonymized stats.

  3. Attribution drift: Prices change. If you recompute savings later with today's energy price, you'll inflate historical savings. Fix: store the price you used at time of calculation alongside the calculation (see the sketch after this list).

  4. Schema drift: Resource descriptions will evolve (you add salinity, you add biogenic/technical origin). Fix: store the raw submitted payload (JSON) + normalized columns. That way old data doesn't break.

  5. GDPR / public-body installs: Cities sometimes force retention policies. Fix: make retention per tenant (city can say “keep 5 years”, SME can say “keep 1 year”).
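
A minimal sketch of the frozen snapshot from point 3 (table and column names are illustrative):

CREATE TABLE match_economics_snapshot (
  match_attempt_id    BIGINT  NOT NULL,      -- which match this calculation belongs to
  calc_version        TEXT    NOT NULL,      -- which approved calculator / formula version
  energy_price_eur    NUMERIC NOT NULL,      -- the price actually used at calculation time
  volume_mwh          NUMERIC NOT NULL,
  transport_cost_eur  NUMERIC NOT NULL,
  savings_eur         NUMERIC NOT NULL,      -- computed once, never silently recomputed
  computed_at_ts      TIMESTAMPTZ NOT NULL
);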


7. Minimal viable history (if you want lean)

If you don't want to overbuild now, do this today:

  1. Versioned resource profiles (valid_from / valid_to)
  2. Append-only match log with outcome + reason
  3. Frozen economic snapshot per match (prices, volumes, transport assumptions)
  4. Daily facility activity snapshot (“was active”, “data quality = good/bad”; sketch below)
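
Point 4 is the only piece not already sketched above; a minimal version (names illustrative) could be:

CREATE TABLE facility_activity_snapshot (
  facility_id    BIGINT  NOT NULL,
  snapshot_date  DATE    NOT NULL,
  was_active     BOOLEAN NOT NULL,
  data_quality   TEXT    NOT NULL,           -- 'good', 'bad'
  PRIMARY KEY (facility_id, snapshot_date)
);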

That alone lets you:

  • show progress over time
  • train the matcher
  • prove value
  • bill on transactions

You can add telemetry / SCADA later.


8. Where it plugs into your doc

  • Section: Compliance & Reporting → needs historical
  • Section: Municipal & Government Revenue → needs historical
  • Section: Revenue Optimization / A/B on pricing → needs historical
  • Section: Risk Mitigation → store FAIL reasons

So: it was assumed but not specified. You should specify it, because this is one of the few places where technical architecture actually changes your sales story.


Final take

Yes, store it. But store states over time, not just “logs”. And make failed / rejected / blocked matches first-class — theyre the part most people forget, and theyre the part that will actually let you tell cities “heres exactly what policy to change to unlock 12 more exchanges”.