The theory of graph games with omega-regular winning conditions is the foundation for modeling and synthesizing reactive processes. In the case of stochastic reactive processes, the corresponding stochastic graph games have three players, two of them (System and Environment) behaving adversarially, and the third (Uncertainty) behaving probabilistically. We consider two solution problems for stochastic graph games: a qualitative problem, which asks for the set of states from which a player can win with probability 1 (almost-sure winning), and a quantitative problem, which asks for the maximal probability of winning (optimal winning) from each state. We show that, for Rabin winning conditions, both problems are in NP. As these problems were known to be NP-hard, it follows that they are NP-complete for Rabin conditions and, dually, coNP-complete for Streett conditions. The proof proceeds by showing that pure memoryless strategies suffice for winning, both qualitatively and quantitatively, in stochastic graph games with Rabin conditions. Whether such simple strategies suffice was an open problem, and the answer is of interest in its own right, as it implies that controllers for Rabin objectives have simple implementations. We also prove that, for every omega-regular objective, optimal winning strategies are no more complex than almost-sure winning strategies.
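As a concrete illustration of the quantitative problem, the following is a minimal sketch (not taken from the paper) of value iteration on a toy three-player game with a reachability objective, a simple special case expressible as a Rabin condition. The state names and the `value_iteration` helper are invented for this example; it also shows a pure memoryless optimal strategy: System always picks the successor of highest value, with no randomization and no memory.

```python
# Toy 2.5-player game: each state is controlled by System ('max'),
# Environment ('min'), or chance ('prob').  Successors are a list for
# player states and a {successor: probability} dict for chance states.
# Value iteration approximates the optimal probability of reaching 'goal'.

def value_iteration(controller, succ, target, iters=200):
    v = {s: 1.0 if s in target else 0.0 for s in controller}
    for _ in range(iters):
        nv = {}
        for s in controller:
            if s in target:
                nv[s] = 1.0                          # target is absorbing
            elif controller[s] == 'max':
                nv[s] = max(v[t] for t in succ[s])   # System maximizes
            elif controller[s] == 'min':
                nv[s] = min(v[t] for t in succ[s])   # Environment minimizes
            else:                                    # chance averages
                nv[s] = sum(p * v[t] for t, p in succ[s].items())
        v = nv
    return v

controller = {'a': 'max', 'p': 'prob', 'goal': 'max', 'sink': 'max'}
succ = {
    'a': ['p', 'sink'],            # System chooses: try the coin, or give up
    'p': {'goal': 0.5, 'a': 0.5},  # coin flip: reach goal, or back to 'a'
    'goal': ['goal'],
    'sink': ['sink'],
}
vals = value_iteration(controller, succ, {'goal'})
# From 'a', System can retry the coin flip forever, so it wins with
# probability 1; the optimal strategy is pure and memoryless:
# always move from 'a' to 'p'.
print(round(vals['a'], 4))  # → 1.0
```

Note that the pure memoryless strategy read off from the fixed point ("from 'a', go to 'p'") is exactly the kind of simple controller the abstract refers to, though for general Rabin conditions the paper's result, not plain value iteration, is what guarantees its existence.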