<aside>
I'm Lena Shakurova, an AI Advisor and the founder of parslabs.org (Conversational AI). I've been building an AI Ethics Starter Kit to help people design more ethical AI systems, and I need research volunteers to help expand it.
This is an open, collaborative effort. Everything we create will be free and publicly shareable.
</aside>
The Starter Kit currently covers five tech risk zones, from data privacy to environmental harm. But there's more work to do before it becomes a truly useful resource for practitioners, builders, and policymakers.
I also want to run more AI ethics workshops and open events, and a well-researched, well-documented resource is the foundation for that.
The first goal is to build a central Notion hub that documents every identified risk, with examples, a checklist to test whether your product poses that risk, and practical recommendations on how to mitigate it.
Later, this document can be turned into workshops, open lectures, checklists, and CustomGPTs that help people audit their products and design ethical AI systems.
This project is just getting started. The structure you see here is my current thinking on how to move it forward; it will evolve. It all started with a talk I gave on Maven (watch it here), which gives the best context for what we're trying to build together.
I'm fully open to suggestions on how to shape this. If you have ideas, I want to hear them.
Two types of contributions are needed. You pick one risk factor plus one type of contribution, and that becomes your project.
1–2 people needed: help define which risks belong in this kit before we document them.
1 person per risk: pick one risk from the list below. Your task is to fully document it using this structure:
See this Notion page for the initial risk factor page, which lays out what we need for each risk. Ideas around the risk page structure are welcome.

If you haven't yet, please watch this video for more context: https://www.youtube.com/watch?v=jqkWVDi_9GY