Responsible AI Foundations
A risk checklist, bias and robustness probes on toy models, and a documentation template tying each experiment to a mitigating control.
What was my role?
I was the sole implementer for this coursework module: I scoped the requirements, wrote the code and experiments for “Responsible AI Foundations,” and produced the write-up with metrics and stated limitations.
Situation
Course lab for “Responsible AI Foundations”: short deadlines, public or synthetic data only, and rubrics that reward reproducible notebooks and honestly stated limitations.
Task
Produce a small, credible artifact (a clean repo or notebook) with baselines, evaluation, and a crisp account of what would change in production.
Action
Implemented the deliverable end to end: a risk checklist, bias and robustness probes on toy models, and a documentation template tying each experiment to a mitigating control. Logged experiments, compared alternatives, and documented dependencies and failure cases.
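The shape of those probes can be illustrated with a minimal sketch on a synthetic dataset. This is not the coursework code; the threshold model, the group labels, and the `eps` noise scale are all illustrative assumptions. The bias probe measures a demographic parity gap (difference in positive-prediction rates between two groups), and the robustness probe measures how often predictions flip under small input noise.

```python
import random

random.seed(0)

# Toy "model" (illustrative): predict positive when the feature exceeds 0.5.
def model(x):
    return 1 if x > 0.5 else 0

# Synthetic records: (feature, group) pairs; group "A"/"B" is a stand-in
# for a sensitive attribute in the real dataset.
data = [(random.random(), random.choice("AB")) for _ in range(1000)]

# Bias probe: demographic parity gap = |P(pred=1 | A) - P(pred=1 | B)|.
def parity_gap(records):
    rates = {}
    for g in "AB":
        preds = [model(x) for x, grp in records if grp == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates["A"] - rates["B"])

# Robustness probe: fraction of predictions that flip under small
# uniform noise of magnitude eps (an assumed perturbation budget).
def flip_rate(records, eps=0.05):
    flips = sum(model(x) != model(x + random.uniform(-eps, eps))
                for x, _ in records)
    return flips / len(records)

print(f"parity gap: {parity_gap(data):.3f}")
print(f"flip rate @ eps=0.05: {flip_rate(data):.3f}")
```

On a real model the same two functions would take the trained predictor and held-out data; the point of the toy version is that each probe is a few lines and can be re-run on every commit.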
Result
Submitted a runnable deliverable with metrics, repeatable setup commands, and a trade-off section outlining how the approach would extend to real systems.
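The experiment-to-control mapping in the documentation template can be sketched as a small typed record. The field names and the sample entry below are hypothetical, not the actual template from the coursework; the idea is only that every logged experiment carries an explicit mitigating control.

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    experiment: str  # which probe or test was run
    finding: str     # the observed metric or failure mode
    risk: str        # the risk the finding maps to
    control: str     # the mitigating control tied to that risk

# Illustrative entry; values are made up for the sketch.
log = [
    ExperimentRecord(
        experiment="demographic parity probe",
        finding="parity gap of 0.04 on toy data",
        risk="disparate selection rates across groups",
        control="per-group threshold review before release",
    ),
]

for rec in log:
    print(f"{rec.experiment} -> {rec.control}")
```

Keeping the mapping in a structured record (rather than free-form prose) means a reviewer can check mechanically that no experiment is missing a control.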