Human ratings are subject to various forms of error and bias. Since the early days of performance assessment, this problem has been sizeable and persistent: expert raters evaluating the quality of an essay, an oral communication, or a work sample often arrive at different ratings for the very same performance. In such cases, assessment outcomes depend largely on which raters happen to provide the ratings, posing a threat to the validity and fairness of the assessment. This book introduces a psychometric approach that establishes a coherent framework for drawing reliable, valid, and fair inferences from rater-mediated assessments, thereby addressing the problem of inevitably fallible human ratings: many-facet Rasch measurement (MFRM). Throughout the book, sample data from a writing performance assessment illustrate key concepts, theoretical foundations, and analytic procedures, encouraging readers to adopt the MFRM approach in their current or future professional contexts.