Prompt Engineering: Stance Classification with LangChain
Structured prompts, output parsers, and regression checks on a frozen stance set so small template edits do not silently break labels.
What was my role?
I was the sole implementer for this coursework module: I scoped the requirements, wrote the code and experiments, and produced the write-up with metrics and limitations.
Situation
A course lab with short deadlines, public or synthetic data, and rubrics that reward reproducible notebooks and honest limitations.
Task
Produce a small, credible artifact (a clean repo or notebook) with baselines, an evaluation, and a crisp account of what would change in production.
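A minimal sketch of the kind of evaluation a stance baseline needs. Macro-F1 over the stance labels is assumed here; the label names and the metric choice are illustrative, not taken from the write-up:

```python
# Illustrative evaluation metric: macro-averaged F1 over an assumed
# three-way stance label set. Dependency-free so it runs anywhere.

LABELS = ("FAVOR", "AGAINST", "NEUTRAL")  # assumed label set

def macro_f1(gold, pred, labels=LABELS):
    """Average per-label F1, so rare labels count as much as common ones."""
    scores = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```

Reporting macro-F1 alongside accuracy keeps a majority-class baseline honest: always predicting the most common stance scores well on accuracy but poorly on macro-F1.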
Action
Implemented the pipeline end to end: structured prompts, output parsers, and regression checks on a frozen stance set so that small template edits do not silently break labels. Logged experiments, compared alternatives, and documented dependencies and failure cases.
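The prompt-plus-parser contract can be sketched as follows. The project uses LangChain's prompt templates and output parsers; this dependency-free version shows the same contract with plain Python, and the template text, label set, and function names are assumptions for illustration:

```python
# Hypothetical sketch of the structured prompt and a strict label parser.
# In the project this role is played by LangChain's ChatPromptTemplate and
# an output parser; the strictness is the point: parse failures should
# raise, not degrade into wrong labels.

STANCE_LABELS = {"FAVOR", "AGAINST", "NEUTRAL"}  # assumed frozen label set

STANCE_PROMPT = (  # illustrative template text
    "Classify the stance of the text toward the target.\n"
    "Target: {target}\n"
    "Text: {text}\n"
    "Answer with exactly one label: FAVOR, AGAINST, or NEUTRAL."
)

def format_prompt(target: str, text: str) -> str:
    """Fill the template; mirrors PromptTemplate.format in LangChain."""
    return STANCE_PROMPT.format(target=target, text=text)

def parse_label(raw: str) -> str:
    """Strictly parse the model output into a known label. Raising on
    anything unexpected makes template regressions surface as errors
    rather than as silent label drift."""
    label = raw.strip().upper().rstrip(".")
    if label not in STANCE_LABELS:
        raise ValueError(f"Unparseable stance label: {raw!r}")
    return label
```

For example, `parse_label(" favor. ")` normalizes to `"FAVOR"`, while `parse_label("maybe favor")` raises, which is what lets the regression checks catch a template edit that changed the model's answer format.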
Result
Submitted a runnable deliverable with metrics, repeatable setup commands, and a trade-off section outlining how the approach would extend to real systems.
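The regression check on the frozen stance set can be sketched as below. The frozen examples and the `classify` callable are hypothetical stand-ins; in the project, `classify` would be the LangChain prompt-plus-parser chain:

```python
# Hypothetical sketch of the frozen-set regression check: after any template
# edit, re-label the frozen examples and fail loudly on any label change.

FROZEN_SET = [  # illustrative frozen examples with expected labels
    {"text": "Masks save lives.", "target": "mask mandates", "label": "FAVOR"},
    {"text": "This policy is a disaster.", "target": "the policy", "label": "AGAINST"},
]

def regression_check(classify, frozen=FROZEN_SET):
    """Run the classifier over the frozen set and collect label drift.
    `classify(text, target)` stands in for the prompt+parser chain.
    An empty return means the template edit kept all labels stable."""
    drift = []
    for ex in frozen:
        got = classify(ex["text"], ex["target"])
        if got != ex["label"]:
            drift.append((ex["text"], ex["label"], got))
    return drift
```

Wiring this into the test suite (e.g. asserting `regression_check(chain) == []` in CI) is what turns "small template edits do not silently break labels" from a hope into a gate.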