Dr Kate Vredenburgh, Dr Ali Boyle, Professor Alex Voorhoeve & Dr Paola Romero
AI is now embedded in our day-to-day lives, influencing who we date, the news stories we read on our social media feeds, how we invest in financial assets, our community’s exposure to the police, the goods we consume online, and the tasks we do at work.
Ethics has often been the last step in the design and deployment of AI technologies. But new and pending regulation, activism by civil society, and self-governance efforts by companies now seek to integrate values like fairness, safety, and privacy throughout product development and the design of decision-support systems.
This course introduces you to the core ethics concepts needed to build better technology and reason about its impact on the economy, civil society, and government. In the first half of the course, we consider ethical questions raised by different steps in the data science pipeline, such as:
- What is data, and how can we design better, more ethical data governance regimes?
- Can technology discriminate? If so, what are promising strategies for promoting fairness and mitigating algorithmic bias?
- Can we understand black-box AI systems and explain their decisions? Why is it morally important that we do so?
In the second half of the class, we consider ethical questions raised by the use of AI systems to manage our work, political, and social lives, such as:
- How does automation impact economic inequality?
- Do employees have a right to privacy at work?
- How does AI concentrate power, and when is this concentration of power objectionable?
- How can we embed human values into AI systems?
- How does AI art challenge authorship and intellectual property rights?