Description
In computer science, the subject of algorithmic fairness has received much attention, yet the understanding that algorithms can have disparate impact on populations through various dynamic mechanisms is more recent. We contribute to this evolving understanding by presenting two models of the dynamic interaction between machine learning algorithms and populations of interest. First, we introduce the notion of delayed impact: the welfare impact of a decision-making algorithm on a population after decision outcomes are observed, motivated, for example, by the change in average credit scores after a new loan approval algorithm is applied. We demonstrate that several statistical criteria for fair machine learning proposed by the research community, if imposed as constraints on decision-making, can harm the welfare of a disadvantaged population. Next, we consider a dynamic setting in which individuals invest in attaining a positive outcome based on their expected reward from an algorithmic decision rule. We show that undesirable long-term outcomes arise from heterogeneity across groups and the lack of realizability, and we study the effectiveness of interventions such as `decoupling' the decision rule by group and providing subsidies.
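To make the first model concrete, here is a minimal numerical sketch in the spirit of the delayed-impact analysis; it is not the dissertation's model or parameters. The score bins, repayment probabilities, score updates, lender utilities, and group distributions are all invented for illustration. It compares a lender who maximizes utility separately per group against one constrained to a common acceptance rate (demographic parity), and prints the expected one-step change in each group's mean score.

```python
import numpy as np

# All numbers below are invented for illustration.
repay = np.array([0.20, 0.35, 0.50, 0.65, 0.80, 0.95])  # P(repay | score bin), low to high bins
gain, loss = 75.0, -150.0    # individual's score change on repayment / default
u_pos, u_neg = 1.0, -1.0     # lender's utility per repayment / default

pi = {"A": np.array([0.05, 0.10, 0.15, 0.25, 0.25, 0.20]),   # advantaged group
      "B": np.array([0.25, 0.25, 0.20, 0.15, 0.10, 0.05])}   # disadvantaged group

def threshold_policy(rate, p):
    """Accept the top `rate` fraction of a group by score (fractional at the margin)."""
    policy, remaining = np.zeros_like(p), rate
    for i in reversed(range(len(p))):        # fill from the highest score bin downward
        take = min(p[i], remaining)
        policy[i], remaining = take / p[i], remaining - take
    return policy

def lender_utility(policy, p):
    """Expected lender utility of an acceptance policy over a group."""
    return np.sum(p * policy * (repay * u_pos + (1 - repay) * u_neg))

def delayed_impact(policy, p):
    """Expected change in the group's mean score after one round of lending."""
    return np.sum(p * policy * (repay * gain + (1 - repay) * loss))

rates = np.linspace(0, 1, 101)

# Max-utility: the lender picks each group's acceptance rate separately.
for g, p in pi.items():
    r = max(rates, key=lambda r: lender_utility(threshold_policy(r, p), p))
    print(f"max-utility  group {g}: rate {r:.2f}, "
          f"impact {delayed_impact(threshold_policy(r, p), p):+.2f}")

# Demographic parity: one common acceptance rate maximizing total lender utility.
r_dp = max(rates, key=lambda r: sum(lender_utility(threshold_policy(r, p), p)
                                    for p in pi.values()))
for g, p in pi.items():
    print(f"parity       group {g}: rate {r_dp:.2f}, "
          f"impact {delayed_impact(threshold_policy(r_dp, p), p):+.2f}")
```

Under these assumed numbers, the parity constraint pushes the common acceptance rate into score bins where the expected score change is negative, so the disadvantaged group's printed impact is negative even though its impact under unconstrained utility maximization is positive.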
Adjacent to the question of long-term fairness, another challenge in using machine learning for societal benefit is that of social choice. In markets, individual learning objectives, as typically conceived, may conflict with the long-term social objective of reaching an efficient market outcome. Motivated by repeated matching problems in online marketplaces and platforms, we study two-sided matching markets in which participants match repeatedly and gain imperfect information about their preferences through matching. Because of competition, one participant's attempt to learn their preferences may affect the utility of other participants. We design a machine learning algorithm for a market platform that enables participants collectively to learn their preferences efficiently enough to quickly attain a notion of market fairness known as stability. Further, we study a decentralized version of this problem and design learning algorithms with which participants strategically avoid competition given past data, removing the need for a central platform. We also investigate whether strategic participants, tempted to act independently, should still follow the algorithm's recommendations, and we show several positive results on the algorithms' incentive compatibility.
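The centralized setting can be sketched as follows; this is an assumed setup in the spirit of platform-mediated bandit learning for matching markets, not the dissertation's algorithm verbatim. A hypothetical platform ranks arms for each agent by upper confidence bound (UCB) indices and matches the market each round via agent-proposing Gale-Shapley deferred acceptance; the market size, reward noise, and known fixed arm-side preferences are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_arms, horizon = 3, 3, 2000

mu = rng.uniform(0, 1, (n_agents, n_arms))   # unknown mean rewards of agents over arms
arm_prefs = [list(rng.permutation(n_agents)) for _ in range(n_arms)]  # fixed arm-side preferences

counts = np.zeros((n_agents, n_arms))
means = np.zeros((n_agents, n_arms))

def deferred_acceptance(agent_rank):
    """Agent-proposing Gale-Shapley; agent_rank[i] lists arms in agent i's preference order."""
    held = {}                          # arm -> agent it currently holds
    nxt = [0] * n_agents               # next list position each agent proposes to
    free = list(range(n_agents))
    while free:
        i = free.pop()
        a = agent_rank[i][nxt[i]]
        nxt[i] += 1
        if a not in held:
            held[a] = i
        elif arm_prefs[a].index(i) < arm_prefs[a].index(held[a]):  # arm a prefers agent i
            free.append(held[a])
            held[a] = i
        else:
            free.append(i)
    return {int(i): int(a) for a, i in held.items()}   # agent -> arm

for t in range(1, horizon + 1):
    # Optimistic preference estimates; unexplored (agent, arm) pairs are ranked first.
    ucb = np.where(counts > 0,
                   means + np.sqrt(2 * np.log(t) / np.maximum(counts, 1)),
                   np.inf)
    ranks = [list(np.argsort(-ucb[i])) for i in range(n_agents)]
    for i, a in deferred_acceptance(ranks).items():
        reward = mu[i, a] + rng.normal(0, 0.1)        # matched pair observes a noisy reward
        counts[i, a] += 1
        means[i, a] += (reward - means[i, a]) / counts[i, a]

truth = deferred_acceptance([list(np.argsort(-mu[i])) for i in range(n_agents)])
learned = deferred_acceptance([list(np.argsort(-means[i])) for i in range(n_agents)])
print("stable matching under true preferences:   ", truth)
print("matching under learned (UCB) preferences: ", learned)
```

Comparing the two printed matchings indicates whether repeated optimistic matching has steered the market to the stable matching under the true, initially unknown preferences.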