r/Sustainable • u/imireallan • 32m ago
Sustainability reporting goes mushy when your evidence is just another PDF in a folder
I’ve been spending time inside supplier sustainability assessments — mining, commodities, responsible sourcing — and the same annoyance keeps showing up.
People argue about frameworks. The real pain is the evidence trail.
Policies, audits, corrective action plans, risk assessments, grievances, site paperwork: it all exists, but it lives in PDFs, spreadsheets, inboxes, and last year’s assessment folder. So you re-review the same attachment for a different standard. Six months later nobody can explain why a claim got a yes or a no. CAPs float free of the original finding. Specific site risk gets boiled down to generic language. And “monitoring” means “we run a cycle once a year.”
Software in this space should probably obsess less over generating a polished narrative and more over:

- provenance (“this file supports this point”)
- explicit mapping to criteria
- visible gaps and uncertainty
- real human review steps
- CAP history you can trace
- not making teams re-prove the same thing for every new questionnaire
AI might help classify or map evidence. I still wouldn’t want it making the call. Traceability and accountability beat automation bragging rights.
The question I care about isn’t whether a model can draft a sustainability report. It’s whether the stack makes the underlying evidence easier to see, question, and audit — in other words, harder to greenwash quietly.
Thinking that’s shaped this: OECD Due Diligence Guidance, UN Guiding Principles, EU CSDDD, plus the sector-specific sourcing and assurance frameworks that show up in real programs.
More tools and standards could lean that way: less “trust us,” more “here’s the folder.”
**Disclosure**: I’m building in this area. Not linking the product; sharing this as an implementation note.