Catalyst Cooperative is a data engineering and analysis consultancy, specializing in energy system and utility financial data. Our current focus is on the US electricity and natural gas sectors. We primarily serve non-profit organizations, academic researchers, journalists, climate policy advocates, public policymakers, and occasionally smaller business users.
We believe public data should be freely available and easy to use by those working in the public interest. Whenever possible, we release our software under the MIT License, and our data products under the Creative Commons Attribution 4.0 License.
If you're interested in hiring us, email hello@catalyst.coop. We can often make accommodations for smaller/grassroots organizations and frequently collaborate with open source contributors.
Contact Us 💌
- For general support, questions, or other conversations about our work that might be of interest to others, head over to our GitHub Discussions
- If you'd like to get (very) occasional updates about our work, sign up for our email list.
- Want to schedule a time to chat with us one-on-one about our software or data? Have ideas for improvement, or need some personalized support? Join us for Office Hours
- Follow us on Bluesky: @catalyst.coop
- Connect with us on LinkedIn
- Follow us on Mastodon: @CatalystCoop@mastodon.energy
- Follow us on Twitter: @CatalystCoop (deprecated)
- Subscribe to our channel on YouTube
- Play with our data and notebooks on Kaggle
- Preview, filter, and download our data via the web using the PUDL Data Viewer
- Combine our data with ML models on HuggingFace
- Learn more about us on our website: https://catalyst.coop
Services We Provide
- Programmatic acquisition, cleaning, and integration of public data sources.
- Data-oriented software development.
- Compilation of new machine-readable data sources from regulatory filings, legislation, and other public information.
- Data warehousing and dashboard development.
- Both ad-hoc and replicable production data analysis.
- Translation of existing ad-hoc data wrangling workflows into replicable data pipelines written in Python.
- Reproducible data pipeline design, implementation, and ongoing maintenance.
Tools We Use 🔨 🔧
- Python is our primary language for everything.
- Pandas, the Swiss Army knife of tabular data manipulation in Python.
- Dagster for orchestrating and parallelizing our data pipelines.
- DuckDB as a performant, columnar, analysis-oriented embedded database: the SQLite of analytical databases.
- Flask for building web apps like the PUDL Data Viewer.
- Pixi, a fast, ergonomic command-line tool for conda package management.
- Marimo Notebooks for interactive dashboards and data exploration.
- Polars DataFrames for working with data tables that don't fit in memory or are computationally intensive to process.
- Apache Parquet to persist larger data tables to disk.
- Pydantic for managing and validating settings and our collection of metadata.
- Pandera to specify dataframe schemas and data validations in conjunction with Dagster.
- Pyodide to let users access and play with our data in-browser.
- SQLite for local storage and distribution of tabular, relational data.
- JupyterLab for interactive data wrangling, exploration, and visualizations.
- Scikit Learn to construct machine learning pipelines.
- Splink for fast, generalized entity matching / record linkage.
- MLflow for ML experiment and artifact tracking, mostly in the context of our entity matching / record linkage work.
- Google Batch to minimize the infrastructure we need to manage for our nightly builds.
- Hypothesis for more robust data-oriented unit testing.
- Zenodo provides long-term, programmatically accessible, versioned archives of all our raw inputs.
- Sphinx for building our documentation, incorporating much of our structured metadata directly using Jinja templates.
- The Frictionless Framework as a standard interchange model for tabular data.
- VS Code is our primary code editor, ever more deeply integrated with GitHub.
- pre-commit to enforce code formatting and style standards.
- GitHub Actions to run our continuous integration and coordinate our nightly builds and data scraping jobs.
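As a small sketch of the pattern behind several of these tools, the snippet below shows why SQLite works well for distributing tabular, relational data: a single file (or here, an in-memory database) is queryable with no server to manage. The table name and columns are invented for illustration and are not the actual PUDL database schema.

```python
import sqlite3

# Hypothetical schema for illustration only -- NOT the real PUDL tables.
# A distributed build would be a .sqlite file on disk instead of ":memory:".
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE plants (
        plant_id INTEGER PRIMARY KEY,
        plant_name TEXT NOT NULL,
        capacity_mw REAL
    )"""
)
conn.executemany(
    "INSERT INTO plants VALUES (?, ?, ?)",
    [(1, "Example Gas Plant", 150.0), (2, "Example Wind Farm", 75.5)],
)
conn.commit()

# Ordinary relational queries run locally against the file.
rows = conn.execute(
    "SELECT plant_name, capacity_mw FROM plants "
    "WHERE capacity_mw > 100 ORDER BY plant_id"
).fetchall()
print(rows)  # [('Example Gas Plant', 150.0)]
```

The same file-based, serverless idea is what makes DuckDB attractive for the analytical (columnar) side of the same workflows.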
Tools We're Studying 🚧
- Agent Skills to give LLM-based coding agents dynamic, specialized context.
- Zensical, a beautiful, blazing fast static site generator written in Rust.
- OpenSearch for processing, indexing, and programmatically managing large troves of unstructured documents.
- HuggingFace Hub as another platform for distributing larger datasets and pre-trained machine-learning models specific to energy system data.
Adjacent Projects
- GridStatus
- Interconnection.fyi
- GridEmissions
- PowerGenome from @gschivley
- The Open Grid Emissions Initiative from @grgmiller & Singularity Energy
- Map Your Grid, a project of @open-energy-transition and @openstreetmap to crowdsource a map of the world's electricity infrastructure for use in open modeling.
Organizational Friends & Allies 💞
- The Open Energy Modeling Initiative
- Open Energy Transition
- CarbonPlan
- The Public Environmental Data Partnership
- The Open Knowledge Foundation
- 2i2c: The International Interactive Computing Consortium
- The Open Energy Outlook
- Code for Science & Society
- The US Research Software Engineering Association
- Diagonal Works
- The Environmental and Data Governance Initiative (EDGI)
- Technology Cooperatives Everywhere!
Funders & Clients 💰 💵
- The PUDL Sustainers
- The Alfred P. Sloan Foundation Energy & Environment Program
- RMI
- GridLab
- Climate Change AI
- The Mozilla Foundation
- Carbon Tracker
- Climate Policy Initiative
- Energy Innovation
- Lawrence Berkeley Lab Energy Technologies Area
- Invenia Labs
- Western Interstate Energy Board
- Flora Family Foundation
- The Deployment Gap Education Fund
Business & Employment 🌲 🌲
Catalyst is a democratic workplace and a member of the US Federation of Worker Cooperatives. We exist to help our members earn a decent living while working for a more just, livable, and sustainable world. Our income comes from a mix of grant funding and client work. We only work with mission-aligned clients.
We are an entirely remote organization, and have been since well before the coronavirus pandemic. Our members are scattered all across North America from Alaska to Mexico. We enjoy a great deal of autonomy and flexibility in determining our own work-life balance and schedules. Membership entails working a minimum of 1000 hours each year for the co-op.
As a small 100% employee-owned cooperative, we are able to compensate members through an unusual mix of wages and profit sharing, including:
- An hourly wage (currently $36.75/hr)
- Tax-deferred employer retirement plan contributions (proportional to wages, up to 25% of wages)
- Tax-advantaged patronage dividends (proportional to hours worked, unlimited but subject to profitability)
We also reimburse ourselves for expenses related to maintaining a home office, and provide a monthly health insurance stipend.
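As a rough back-of-the-envelope illustration: only the $36.75/hr wage, the 25% retirement-contribution cap, and the 1000-hour membership minimum come from the figures above; treating the minimum hours as the annual total is a hypothetical assumption, and patronage dividends are omitted since they depend on profitability.

```python
# Sketch of annual member compensation under stated assumptions.
HOURLY_WAGE = 36.75      # current hourly wage (from above)
RETIREMENT_CAP = 0.25    # retirement contributions capped at 25% of wages

hours_worked = 1000      # hypothetical: the minimum annual hours for membership
wages = hours_worked * HOURLY_WAGE
max_retirement = wages * RETIREMENT_CAP  # ceiling on employer retirement contribution

print(f"Wages: ${wages:,.2f}")                    # Wages: $36,750.00
print(f"Max retirement: ${max_retirement:,.2f}")  # Max retirement: $9,187.50
```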
Candidates must complete at least 500 hours of contract work for the cooperative over six months, at which point they will be considered for membership.