### Description

The first model we study is selfish routing. In this model, we analyze the efficiency of users routing traffic selfishly and study a counterintuitive phenomenon known as Braess's Paradox: adding a link to a network with selfish routing can increase the latency experienced by all users. We prove tight bounds on the maximum latency increase caused by Braess's Paradox in single-commodity networks, and nearly tight bounds in multicommodity networks. We also give the first nearly tight bounds on the maximum latency of selfish routing in multicommodity networks, relative to the maximum latency of an optimal routing.
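The paradox can be seen concretely in the classic four-node example (a standard illustration, not a result specific to this thesis): one unit of traffic flows from s to t over two paths, each combining a load-dependent edge with latency l(x) = x and a constant edge with latency l(x) = 1. A minimal numeric sketch:

```python
# Classic Braess network: paths s->v->t and s->w->t, each with total
# latency x + 1, where x is the fraction of traffic on that path.

def equilibrium_latency_without_link():
    # At equilibrium the unit of traffic splits evenly across the two
    # symmetric paths, so each load-dependent edge carries x = 1/2.
    x = 0.5
    return x + 1  # every user experiences latency 1.5

def equilibrium_latency_with_link():
    # Adding a zero-latency shortcut v->w creates the path s->v->w->t
    # with latency 2x, which dominates for every individual user; all
    # traffic takes it, so both load-dependent edges carry x = 1.
    x = 1.0
    return x + 0 + x  # every user now experiences latency 2.0

before = equilibrium_latency_without_link()
after = equilibrium_latency_with_link()
assert after > before  # the added link made every user worse off
```

Here the added link raises everyone's latency from 1.5 to 2.0, a 4/3 factor; the bounds in the thesis quantify how large such an increase can be in general networks.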

In the second part of the thesis, we study random graph models that have been used to model the Internet, seeking properties of the generated graphs that enable simple compact routing schemes. The goal of compact routing is to design algorithms that minimize the information stored at each node in the network while preserving every node's ability to route packets to every other node along relatively short paths. We show that graphs generated by several random graph models of the Internet (e.g., the preferential attachment model) can be decomposed in a novel manner that makes compact routing easy to achieve. Along the way, we also prove that a Pólya urn random process has good load-balancing properties, which may be of independent interest.
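For concreteness, the preferential attachment model mentioned above can be sketched as follows. This is the standard Barabási–Albert-style process, not necessarily the exact variant analyzed in the thesis: each arriving node attaches m edges to existing nodes chosen with probability proportional to their current degree.

```python
import random

def preferential_attachment(n, m, seed=0):
    """Generate the edge list of a preferential attachment graph on n
    nodes, where each new node attaches to m distinct existing nodes
    chosen with probability proportional to degree."""
    rng = random.Random(seed)
    # Start from a small clique on m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Degree-proportional sampling via a multiset of edge endpoints:
    # a node with degree d appears d times, so uniform choice over the
    # multiset is choice proportional to degree.
    endpoints = [u for e in edges for u in e]
    for v in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for u in targets:
            edges.append((v, u))
            endpoints.extend([v, u])
    return edges

edges = preferential_attachment(n=200, m=2)
```

Graphs grown this way exhibit the heavy-tailed degree distributions observed in Internet topologies, which is what makes the decomposition results above relevant to real networks.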

In the last part of the thesis, we study an online bipartite matching problem that models a problem arising in Internet service provision. We imagine a set of servers providing some service; clients arrive one at a time, each requesting service from a subset of the servers capable of handling its request. The goal is to assign arriving clients to capable servers while minimizing the number of times a client must be switched from one server to another. Although prior worst-case analysis of this problem has not yielded interesting results, we prove tight bounds on the number of switches required in several natural models.
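A hedged sketch of the setting (the strategy and names below are illustrative assumptions, not the thesis's algorithm): with unit-capacity servers, a natural way to keep all clients served is to reassign along an augmenting path when a new client's admissible servers are all occupied, and every reassigned client counts as one switch.

```python
def assign_online(arrivals):
    """Process (client, admissible_servers) pairs online, maintaining a
    matching of clients to unit-capacity servers via augmenting paths,
    and count how many times a previously placed client is switched."""
    match = {}       # server -> client currently assigned to it
    admissible = {}  # client -> list of servers it may use
    switches = 0

    def augment(client, visited):
        # Standard augmenting-path search (Kuhn's algorithm).
        for s in admissible[client]:
            if s in visited:
                continue
            visited.add(s)
            if s not in match or augment(match[s], visited):
                match[s] = client
                return True
        return False

    for client, servers in arrivals:
        admissible[client] = servers
        before = {cl: s for s, cl in match.items()}
        if augment(client, set()):
            after = {cl: s for s, cl in match.items()}
            # A switch is a previously assigned client whose server changed.
            switches += sum(1 for cl, s in before.items() if after[cl] != s)
    return match, switches

# Client "a" takes server 0; when "b" arrives needing server 0,
# "a" is switched to server 1 to make room: one switch.
match, switches = assign_online([("a", [0, 1]), ("b", [0])])
```

Each arrival here may trigger a cascade of switches along the augmenting path; the bounds in the thesis concern how many such switches are necessary in total.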

As we analyze these problems arising in the Internet from a precise mathematical perspective, we also reflect on the process used to solve mathematical problems. Although this thought process can be difficult to describe, in one case we provide a step-by-step account of how the final result was proved, and describe a high-level algorithm summarizing the methodology used to prove it.