Rate My Professors
Rate My Professors is an online platform that facilitates anonymous student evaluations of postsecondary instructors through numerical ratings and textual comments on teaching effectiveness, course difficulty, and overall quality. It was founded in May 1999 by software engineer John Swapceinski in Menlo Park, California, initially under the name TeacherRatings.com.[1][2] The site rebranded to RateMyProfessors.com in 2001 and has since expanded to cover over 1.7 million professors at approximately 8,000 institutions, accumulating more than 19 million reviews that influence student course selections.[3][4] Key features include aggregated scores for overall quality, clarity, and helpfulness, plus a "would take again" metric, all derived from user-submitted data without institutional verification, enabling rapid feedback outside official evaluation channels.[5]

Ownership has changed hands several times, including acquisitions by Viacom's mtvU in 2007 and by the digital media company Cheddar in 2018, reflecting the site's commercial evolution amid growing student reliance on it for pre-enrollment decisions.[2]

While valued for democratizing access to peer insights on academic experiences, Rate My Professors faces scrutiny over reliability: anonymous postings invite subjective biases, including correlations between high ratings and perceived easiness or instructor attractiveness rather than substantive teaching merit, according to empirical analyses of review patterns.[6][7] Academic studies find moderate alignment with official evaluations but underscore distortions from self-selected, grade-influenced submissions, positioning the platform as a supplementary rather than authoritative tool amid broader debates on accountability in higher education.[8]

History
Founding and Early Years
Rate My Professors was founded in May 1999 by John Swapceinski, a software engineer based in Menlo Park, California, who was pursuing a master's degree at San Jose State University.[1][2] The platform initially launched as TeacherRatings.com, allowing students to submit anonymous evaluations of college instructors based on criteria such as clarity, helpfulness, and overall quality.[9] Swapceinski developed the site in response to his own unsatisfactory experience with a professor, aiming to provide peers with candid insights into teaching effectiveness that were absent from official channels.[10][11]

In 2001, the website rebranded to RateMyProfessors.com, expanding its scope to searchable databases of professors across U.S. and Canadian institutions while maintaining user anonymity to encourage honest feedback.[9] Early adoption was driven by word of mouth among students seeking alternatives to limited administrative evaluations, with simple 1-5 rating scales for attributes such as easiness and "hotness" facilitating quick submissions.[12] By early 2004, the site had amassed nearly 1.5 million ratings covering professors at almost 4,000 schools, reflecting organic growth amid rising internet access on campuses.[13] User-generated content then tripled to over six million ratings by 2007, though this period also introduced debates over review validity due to potential biases in anonymous postings.[13] Swapceinski emphasized the site's role in empowering students with data-driven choices, positioning it as a consumer-oriented tool in higher education.[14]

Expansion and Acquisition
Following its relaunch as RateMyProfessors.com in 2001, the platform expanded rapidly as internet adoption among college students increased, evolving from a niche tool into a national resource covering thousands of institutions.[1] By the mid-2000s, it hosted ratings for over one million professors based on millions of student submissions, reflecting steady year-over-year traffic growth amid broader web-community trends.[12][15]

In 2005, the site was acquired by software entrepreneurs Patrick Nagle and William DeSantis, its first major ownership change, enabling operational scaling under private management.[2] Two years later, on January 17, 2007, Viacom's MTV Networks subsidiary mtvU purchased it from Nagle and DeSantis, integrating the platform into its college-focused media ecosystem to leverage synergies with campus programming and expand engagement among undergraduates.[16][17] Under Viacom ownership, the site continued to accumulate ratings, reaching approximately 6.8 million total submissions by 2009.[12]

Viacom held the platform until October 25, 2018, when digital media company Cheddar acquired it for an undisclosed sum, aiming to reposition it as a utility for student decision-making rather than purely media content.[15] At the time of the Cheddar deal, RateMyProfessors.com featured over 20 million ratings for 1.8 million professors across more than 8,000 schools, with monthly active users exceeding 6 million (peaking seasonally at 7 million) and 125,000 new ratings added each month.[1][15] Post-acquisition, Cheddar invested in site redesigns, enhanced search functionality, and planned premium analytics tools for educators, further supporting growth and monetization.[15] By late 2018, annual revenue stood at roughly $3.4 million, underscoring the site's established scale in the education-review sector.[1]

Core Features and Functionality
Ratings and Review System
Users submit ratings and reviews for professors anonymously via the Rate My Professors platform; submissions are required to come from individuals who have taken or are currently enrolled in the professor's course, and only one review per student per course is permitted.[5] The core numerical rating is Overall Quality, assessed on a 1-5 scale, reflecting the professor's effectiveness in teaching course material and providing help, including availability, approachability, and communication clarity; ratings of 3.5-5 denote good quality (accompanied by a smiley-face icon), 2.5-3.4 average (neutral), and 1-2.4 poor (frowny face).[18] The displayed Overall Quality score for a professor is the arithmetic mean of all valid numerical submissions for that instructor.[18]

Supplementary ratings include Level of Difficulty, rated from 1 (easiest) to 5 (hardest), which gauges the perceived rigor of the course without influencing the Overall Quality score.[18] Users also indicate whether they "Would Take Again," a binary yes/no option introduced for ratings after May 25, 2016; the aggregate is displayed as the percentage of "yes" responses among qualifying submissions and likewise does not affect Overall Quality.[18] An additional checkbox notes textbook usage but carries no numerical weight in aggregations.[18] Prior to May 18, 2016, Overall Quality was derived by averaging separate 1-5 scores for Clarity (clarity of lectures and explanations) and Helpfulness (respectfulness, concern for students, and availability), but the system shifted to a unified Overall Quality input thereafter.[18]

Each submission may include an optional textual review limited to comments on the academic experience, including pros and cons, under guidelines that prohibit profanity, vulgarity, personal attacks (such as references to appearance or private life), unsubstantiated claims of illegal activity, hyperlinks, and non-English text (except French for Canadian institutions).[5] All reviews undergo moderator review prior to posting, with non-compliant content removed; users may flag existing reviews for re-evaluation, though negative feedback is retained unless it violates the rules.[5] Professors cannot access submitter identities, ensuring anonymity, though raters must affirm enrollment at the relevant school so that ratings stay tied to actual coursework.[5]
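The aggregation rules above translate directly into code. The following is a minimal, illustrative sketch, not the site's actual implementation; the `Rating` type and all field names are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rating:
    overall_quality: float            # 1-5 scale
    difficulty: float                 # 1-5 scale, excluded from quality
    would_take_again: Optional[bool]  # None when the rating predates the field

def aggregate(ratings: list[Rating]) -> dict:
    """Aggregate a professor's ratings as described above: Overall Quality
    is the arithmetic mean of all submissions; Would Take Again is the
    share of 'yes' answers among ratings that include the field."""
    if not ratings:
        return {"overall_quality": None, "would_take_again_pct": None}
    quality = sum(r.overall_quality for r in ratings) / len(ratings)
    answered = [r.would_take_again for r in ratings if r.would_take_again is not None]
    wta_pct = round(100 * sum(answered) / len(answered)) if answered else None
    return {"overall_quality": round(quality, 1), "would_take_again_pct": wta_pct}
```

Note that Level of Difficulty is carried on each rating but deliberately omitted from the quality mean, matching the site's stated rule that difficulty does not influence Overall Quality.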
Search and Additional Tools
The search functionality on Rate My Professors centers on a primary query interface where users enter an institution's name to reach a school-specific page, then search for professors by name or browse by department and course.[19] This school-first approach structures results around the more than 8,000 covered institutions as of 2023, keeping lookups within relevant academic contexts.[19] Platform-wide searches by professor name are also supported, though results often require manual refinement because of common names and incomplete indexing.[20]

Filtering options remain basic: within a school's page, users can sort professor listings by metrics such as overall quality rating (on a 1-5 scale), difficulty level, or "would take again" percentage, as sketched below.[21] No advanced search with multi-criteria sliders or Boolean operators is officially provided, leading some users to rely on external browser extensions or manual department browsing for deeper analysis.[22] Analyses of similar platforms highlight the potential for richer filters by subject or rating threshold, but Rate My Professors prioritizes simplicity over granular controls.[23]

Additional tools extend beyond search to interactive review engagement via "Thumb Wars," in which users upvote or downvote individual ratings to raise the visibility of helpful feedback.[19] Registered accounts allow users to manage and edit their own submitted reviews, promoting accountability while preserving anonymity in public displays.[19] The mobile app mirrors the web search capabilities and adds push notifications for new ratings on followed professors, though it lacks utilities such as syllabus repositories or advisor recommendation engines.[5] These features support course planning but do not include verified integrations with university systems or data-export tools for comparative analysis.
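As an illustration of the basic sort options just described, here is a small, hypothetical sketch; the `ProfessorSummary` fields and the `sort_listing` helper are invented for the example and do not reflect the site's internal code.

```python
from typing import NamedTuple, Optional

class ProfessorSummary(NamedTuple):
    name: str
    department: str
    overall_quality: float                  # 1-5
    difficulty: float                       # 1-5
    would_take_again_pct: Optional[float]   # None if no post-2016 ratings

def sort_listing(professors: list[ProfessorSummary],
                 key: str = "overall_quality",
                 descending: bool = True) -> list[ProfessorSummary]:
    """Sort a school's professor listing by one of the exposed metrics;
    entries missing the metric (None) are placed last."""
    present = [p for p in professors if getattr(p, key) is not None]
    missing = [p for p in professors if getattr(p, key) is None]
    return sorted(present, key=lambda p: getattr(p, key), reverse=descending) + missing

listing = [
    ProfessorSummary("A. Alvarez", "Mathematics", 4.6, 3.8, 92.0),
    ProfessorSummary("B. Chen", "Mathematics", 3.1, 2.2, None),
]
print(sort_listing(listing, key="difficulty"))
```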
Rating Methodology
Evaluation Criteria and Scales
Rate My Professors employs a numerical rating system on a scale of 1 to 5 for its primary evaluation criteria, where 1 represents the lowest assessment and 5 the highest.[18] The core criteria are Overall Quality, which measures a professor's effectiveness in teaching course material and providing help inside and outside the classroom; Level of Difficulty, which gauges the perceived rigor of the course and workload; and Would Take Again, which indicates whether the rater would enroll in another class with the same professor.[18] Ratings are submitted by self-identified student users and aggregated into averages displayed on professor profiles, with interpretive labels such as "Good" for Overall Quality scores of 3.5–5, "Average" for 2.5–3.4, and "Poor" for 1–2.4, alongside comparable interpretive labels for Difficulty and the Would Take Again percentage.[18]

Prior to May 18, 2016, the Overall Quality score was derived by averaging separate Clarity (effectiveness in explaining material) and Helpfulness (assistance with learning and queries) ratings, both on the 1–5 scale; this methodology was consolidated into a single Overall Quality metric to streamline user input while keeping the focus on instructional efficacy.[18] Users also provide qualitative comments and select from predefined tags (e.g., "Inspirational," "Tough Grader," "Get Ready to Work") to supplement the numerical data, though these do not contribute to the scaled averages.[18] Aggregate scores require a minimum number of ratings for reliability, but no explicit weighting is applied for factors such as recency or user verification beyond basic enrollment checks.[18]

The platform's scales capture subjective student perceptions rather than objective metrics such as standardized test outcomes or peer-reviewed teaching evaluations, introducing variability tied to individual expectations and course contexts.[18] For instance, Difficulty ratings often correlate inversely with Overall Quality in empirical analyses, reflecting a tendency for students to favor less rigorous courses independent of pedagogical merit.[6] No adjustments are made for disciplinary differences, under which STEM fields typically receive higher Difficulty scores (averaging around 3.0–3.5 on the 1–5 scale) than the humanities.[6]
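The Overall Quality label thresholds amount to a simple lookup. The snippet below is an illustrative sketch; the exact handling of values between 2.4 and 2.5 or between 3.4 and 3.5 is an assumption, reasonable because displayed scores are rounded to one decimal place.

```python
def quality_label(score: float) -> str:
    """Map an aggregate Overall Quality score (1-5 scale, shown to one
    decimal place) to the site's interpretive labels described above."""
    if not 1.0 <= score <= 5.0:
        raise ValueError("Overall Quality scores lie on a 1-5 scale")
    if score >= 3.5:
        return "Good"     # smiley-face range, 3.5-5
    if score >= 2.5:
        return "Average"  # neutral range, 2.5-3.4
    return "Poor"         # frowny-face range, 1-2.4

assert quality_label(4.1) == "Good"
assert quality_label(2.5) == "Average"
assert quality_label(1.9) == "Poor"
```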
User Submission Process and Anonymity
Users submit reviews on Rate My Professors by searching for a specific professor affiliated with their institution, selecting the relevant course if multiple are listed, and completing a rating form. The form includes numerical scores on a 1-5 scale for overall quality (reflecting teaching effectiveness and student satisfaction) and difficulty (assessing course rigor), a binary "would take again" indicator, optional tags such as "Inspirational" or "Clear grading criteria," and a free-text comment limited to factual experiences without profanity or personal attacks.[18][5] Submissions require users to affirm that they have attended or are attending the class, though no formal verification such as enrollment proof is enforced, and anonymous posting is possible without account registration.[5][24]

A dedicated moderation team reviews every submission prior to publication to ensure compliance with site guidelines, which prohibit off-topic content, harassment, and disclosure of personal information, and only one review per user per professor per course is permitted to prevent duplicates or manipulation; a hypothetical sketch of these checks appears below.[5] Approved reviews appear publicly under the professor's profile without any identifying details about the submitter, such as name, email, or IP address, preserving anonymity even for registered users, who may opt to track their own contributions privately.[24] The policy explicitly states that professors cannot identify individual raters, aiming to encourage candid feedback without fear of retaliation.[24]

The anonymity feature, while facilitating open expression, relies solely on self-reported attendance and moderator discretion rather than technological or institutional verification, which has raised questions about review authenticity in academic analyses, though the platform maintains that anonymity is essential for unfiltered student input.[5][25] Registered users can edit or delete their own reviews, but once posted, content cannot be altered by professors or third parties except through flagged removal for guideline violations.[5]
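The structural rules described above, one review per user per professor per course and scores confined to the 1-5 scale, can be expressed as a small validation step. This is a hypothetical sketch under assumed names (`Submission`, `accept`, an opaque `user_token`), not the platform's actual pipeline; the free-text comment would still go to human moderators afterward.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Submission:
    user_token: str        # opaque identifier, never shown publicly
    professor_id: int
    course_code: str
    overall_quality: int   # 1-5
    difficulty: int        # 1-5
    would_take_again: bool
    comment: str

# One review per user per professor per course: track seen combinations.
_seen: set[tuple[str, int, str]] = set()

def accept(sub: Submission) -> bool:
    """Apply the structural checks described above before a submission
    is queued for human moderation."""
    key = (sub.user_token, sub.professor_id, sub.course_code)
    if key in _seen:
        return False  # duplicate review for this professor and course
    if not (1 <= sub.overall_quality <= 5 and 1 <= sub.difficulty <= 5):
        return False  # scores must lie on the 1-5 scale
    _seen.add(key)
    return True  # passes structural checks; comment text still awaits moderators
```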
Empirical Analysis
Correlations with Official Student Evaluations
Studies have investigated the extent to which ratings on RateMyProfessors (RMP) align with official student evaluations of teaching (SETs) typically administered by universities. These analyses generally reveal moderate to strong positive correlations, indicating that RMP serves as a partial proxy for institutional evaluations, though differences in methodology, anonymity, and user bases introduce variability. For instance, a 2009 study of 126 professors at a mid-sized U.S. university found a correlation of r = 0.51 between RMP's overall quality rating and SET overall effectiveness scores, with r = 0.54 for RMP clarity and SET communication clarity; correlations were weaker (r < 0.40) for aspects such as workload fairness, suggesting RMP emphasizes perceived ease and entertainment over rigorous pedagogy.[26]

Further research corroborates these findings. A 2007 analysis reported strong correlations between RMP ratings and in-class SETs, particularly for helpfulness (r > 0.60) linked to instructor availability.[27] Similarly, a study matching RMP data to formal SETs yielded r = 0.68 for RMP overall quality against corresponding SET items, with substantive alignment on clarity (r = 0.62) and interest/stimulation (r = 0.55), based on data from multiple institutions.[28] A 2017 examination of publicly available evaluations across U.S. universities identified a strong overall association between RMP scores and institutional SETs, though institutional policies on SET anonymity and timing influenced the strength of the relationship.[29] The key results are summarized in the table below; a brief sketch of the underlying correlation computation follows the table.

| Study | Sample | Key Correlations | Notes |
|---|---|---|---|
| Snyder (2009) | 126 professors, one university | Overall quality: r=0.51; Clarity: r=0.54 | Lower for workload-related items; supports convergent validity but highlights RMP's focus on subjective appeal.[26] |
| Coladarci & Kornfield (2007) | In-class SETs vs. RMP | Helpfulness: r>0.60 with availability | Emphasizes behavioral factors over content mastery.[27] |
| You & Baek (2013) | Multiple institutions | Overall quality: r=0.68; Clarity: r=0.62 | Stronger alignment on engagement metrics.[28] |