
SAT

Standardized test that is widely used in the United States for college admissions
This article is about the college admission test in the United States of America. For the exams in England, known colloquially as SATs, see National Curriculum Assessment.
Type Paper-based standardized test
Developer / Administrator College Board, Educational Testing Service
Knowledge / skills tested Writing, critical reading, math
Purpose Admission to undergraduate programs at universities or colleges
Year started 1926; 95 years ago (1926)
Duration 3 hours (without the essay) or 3 hours 50 minutes (with the essay)
Score / grade range Each of the two sections is scored on a scale of 200-800, in 10-point increments (400-1600 total).
The essay is scored on a scale of 2-8, in 1-point increments, for each of three criteria.
Offered 7 times a year
Countries / regions Worldwide
languages English
Annual number of test takers Over 2.19 million high school graduates in the class of 2020
Requirements / admission criteria No official prerequisite. Intended for high school students. Fluent English assumed.
Fee US$52.50 to US$101.50, depending on the country.
Used by Most universities and colleges offering undergraduate programs in the United States
Website sat.collegeboard.org

The SAT (/ˌɛsˌeɪˈtiː/ ess-ay-TEE) is a standardized test widely used for college admissions in the United States. Since its debut in 1926, its name and scoring have changed several times; it was originally called the Scholastic Aptitude Test, then the Scholastic Assessment Test, then the SAT I: Reasoning Test, then the SAT Reasoning Test, and now simply the SAT.

The SAT is wholly owned, developed, and published by the College Board, a private, not-for-profit organization in the United States. It is administered on behalf of the College Board by the Educational Testing Service, which until recently also developed the SAT. The test is intended to assess students' readiness for college. The SAT was originally designed not to be aligned with high school curricula. However, several adjustments were made for the version of the SAT introduced in 2016, and College Board president David Coleman said he also wanted the test to reflect more closely what high school students learn under the new Common Core standards.

The SAT takes three hours to finish and, as of 2019, costs US$49.50 (US$64.50 with the optional essay, which is no longer offered), excluding late fees, with additional processing fees if the test is taken outside the United States. Scores on the SAT range from 400 to 1600, combining the results of two 200-to-800-point sections: the math section and the evidence-based reading and writing section. Although taking the SAT, or its competitor the ACT, is required for freshman entry to many colleges and universities in the United States, many institutions made these entrance exams optional during the 2010s. This did not stop students from attempting to achieve top scores, however, as they and their parents remained skeptical of what "optional" means in this context; in fact, the test-taking population kept increasing steadily. And while this may have contributed to a long-term decline in scores, experts cautioned against using such scores to gauge the educational level of the entire US population.

Starting with the 2015-16 school year, the College Board began working with Khan Academy to provide free SAT preparation. On January 19, 2021, the College Board announced the discontinuation of the optional essay section and of the SAT Subject Tests after June 2021.

Function

The US states in blue had more seniors in the 2006 class taking the SAT than the ACT, while the red states had more seniors taking the ACT than the SAT.
U.S. states in blue had more seniors in the class of 2020 taking the SAT than the ACT, while states in red had more seniors taking the ACT than the SAT.

The SAT is typically taken by high school juniors and seniors. The College Board states that the SAT is designed to measure the literacy, numeracy, and writing skills needed for academic success in college. It states that the SAT assesses how well test takers analyze and solve problems, skills they learned in school that they will need in college. However, the test is administered under a tight time limit (speeded) to produce a spread of scores.

The College Board also states that use of the SAT in combination with high school grade point average (GPA) provides a better indicator of success in college than high school grades alone, as measured by college freshman GPA. Various studies conducted over the lifetime of the SAT show a statistically significant increase in the correlation with college freshman grades when the SAT is considered alongside high school grades. The predictive validity and powers of the SAT are topics of active research in psychometrics.

Because of US federalism, local control, and the prevalence of private, distance, and home-schooled students, there are significant differences in funding, curricula, grading, and difficulty among US secondary schools. SAT (and ACT) scores are intended to supplement the secondary school record and help admissions officers put local data such as coursework, grades, and class rank into a national perspective.

Historically, the SAT was more widely used by students living in coastal states, and the ACT was more widely used by students in the Midwest and South; in recent years, however, an increasing number of students on the East and West coasts have been taking the ACT. Since 2007, all four-year colleges and universities in the United States that require a test as part of an application for admission have accepted either the SAT or the ACT, and as of fall 2022, over 1,400 four-year colleges and universities no longer require any standardized test scores at all for admission, though some of them apply this policy only temporarily because of the coronavirus pandemic.

Structure

The SAT has two main sections, namely Evidence-Based Reading and Writing (EBRW, usually known as the "English" portion of the test) and the Math section. These are both further divided into four parts: Reading, Writing and Language, Math (no calculator), and Math (calculator allowed). The test taker could also optionally write an essay, which in that case was the fifth part of the test. The total time for the scored portion of the SAT is three hours (or three hours and fifty minutes if the optional essay section was taken). Some test takers who do not take the essay may also have a fifth section, which is used, at least in part, to pretest questions that may appear on future administrations of the SAT. (These questions are not included in the computation of the SAT score.)

The SAT produces two section scores: Evidence-Based Reading and Writing, and Math. Section scores are reported on a scale of 200 to 800, and each section score is a multiple of ten. A total score for the SAT is calculated by adding the two section scores, resulting in total scores that range from 400 to 1600. In addition to the two section scores, three "test" scores are reported on a scale of 10 to 40: one each for Reading, Writing and Language, and Math, with increments of 1 for Reading and for Writing and Language, and 0.5 for Math. There are also two cross-test scores, each on a scale of 10 to 40: Analysis in History/Social Studies and Analysis in Science. If the essay was taken, it was scored separately from the two section scores. Two people score each essay, awarding 1 to 4 points each in three categories: reading, analysis, and writing. These two ratings are then combined into a score of 2 to 8 points per category. Although test takers sometimes cite a combined essay score out of 24, the College Board itself does not combine the category scores into a single essay score; it reports a score for each category.
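The scoring rules above can be summarized in a short sketch. The code below only illustrates the arithmetic described in this section; the function names and sample values are invented for illustration and are not College Board software.

```python
# Minimal sketch of how SAT scores combine, following the rules described above.
# All names and sample values are illustrative only.

def total_score(ebrw_section: int, math_section: int) -> int:
    """Each section score is 200-800 in multiples of 10; the total is their sum (400-1600)."""
    for s in (ebrw_section, math_section):
        assert 200 <= s <= 800 and s % 10 == 0
    return ebrw_section + math_section

def essay_scores(rater1: dict, rater2: dict) -> dict:
    """Two raters each award 1-4 points per category; adding them gives 2-8 per
    category. No single combined essay score is reported."""
    return {cat: rater1[cat] + rater2[cat] for cat in ("reading", "analysis", "writing")}

print(total_score(650, 710))  # 1360
print(essay_scores({"reading": 3, "analysis": 2, "writing": 4},
                   {"reading": 4, "analysis": 3, "writing": 4}))  # {'reading': 7, 'analysis': 5, 'writing': 8}
```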

There is no penalty for guessing on the SAT: scores are based solely on the number of questions answered correctly. The optional essay was no longer offered after the June 2021 administration. The College Board said it was discontinuing the essay section because "there are other ways for students to demonstrate their essay writing skills," including the reading and writing portion of the test. It also acknowledged that the COVID-19 pandemic played a role in the change and had accelerated "an already ongoing process."

Reading test

The SAT Reading Test consists of one 52-question section with a 65-minute time limit. All questions are multiple choice and based on reading passages. Tables, graphs, and charts may accompany some passages, but no math is required to answer the related questions correctly. There are five passages (up to two of which may be a pair of smaller passages) on the Reading Test and 10-11 questions per passage or passage pair. SAT reading passages are drawn from three main areas: literature, history/social studies, and science. Each SAT Reading Test always includes: one passage from US or world literature; one passage from a US founding document or a related text; one passage on economics, psychology, sociology, or another social science; and two science passages. Answers to all questions are based only on the content stated or implied in the passage or passage pair.

The Reading Test (together with the Writing and Language Test) contributes to two subscores, each on a scale of 1 to 15:

  • Command of Evidence
  • Words in context

Writing and Language Test

The SAT Writing and Language Test consists of one section with 44 multiple-choice questions and a time limit of 35 minutes. As with the Reading Test, all questions are based on reading passages, which may be accompanied by tables, graphs, and charts. The test taker is asked to read the passages and suggest corrections or improvements to the underlined portions. Reading passages on this test range in content from topical arguments to nonfiction narratives on a variety of subjects. The skills assessed include: strengthening the clarity of an argument; improving word choice; improving analysis of topics in the social and natural sciences; changing sentence or word structure to improve the organizational quality and impact of the writing; and fixing or improving sentence structure, word usage, and punctuation.

The Writing and Language Test also contributes to two subscores, each on a scale of 1 to 15:

  • Expression of ideas
  • Standard English Conventions

Mathematics

An example of an SAT math question and the correctly gridded answer.

The math portion of the SAT is divided into two sections: Math Test - No Calculator and Math Test - Calculator. In total, the SAT math test is 80 minutes long and includes 58 questions: 45 multiple-choice questions and 13 grid-in questions. The multiple-choice questions have four possible answers; the grid-in questions are free-response and require the test taker to supply the answer rather than select it.

  • The Math Test - No Calculator section contains 20 questions (15 multiple-choice and 5 grid-in questions) and lasts 25 minutes.
  • The Math Test - Calculator section contains 38 questions (30 multiple-choice and 8 grid-in questions) and lasts 55 minutes.

Several scores are reported to the test taker for the math test. A subscore (on a scale of 1 to 15) is reported for each of three categories of mathematical content:

  • "Heart of Algebra" (linear equations, linear systems of equations and linear functions)
  • "Problem solving and data analysis" (statistics, modeling and problem-solving skills)
  • "Passport to Advanced Math" (nonlinear expressions, radicals, exponentials, and other subjects that form the basis of more advanced math).

A math test score is given on a scale of 10 to 40 with an increment of 0.5, and a section score (equal to the test score multiplied by 20) is given on a 200 to 800 scale.

Calculator use

All scientific and graphing calculators, including CAS (Computer Algebra System) calculators, are permitted, but only on the Math Test - Calculator section. Four-function calculators are also acceptable, though not recommended. Calculators on cell phones and smartphones, calculators with typewriter-like (QWERTY) keyboards, laptops and other portable computers, and calculators that can access the Internet are not permitted.

The College Board studied the effect of calculator use on the math scores of the SAT I: Reasoning Test. The study found that performance on the math section was related to the extent of calculator use: those who used calculators on about one third to one half of the items averaged higher scores than those who used them more or less frequently. However, the effect appeared to be more a result of able students choosing to use calculators than of the calculators themselves. There is evidence that frequent calculator use in school, outside the test situation, has a positive effect on test performance compared with not using calculators in school.

Types of questions

Most of the questions on the SAT, except for the optional essay and the grid-in math responses, are multiple choice; all multiple-choice questions have four answer choices, one of which is correct. Thirteen of the questions on the math portion of the SAT (about 22% of all the math questions) are not multiple choice. They instead require the test taker to bubble in a number in a four-column grid.

All questions on each section of the SAT are weighted equally. For each correct answer, one raw point is added; no points are deducted for incorrect answers. The final score is derived from the raw score; the precise conversion chart varies between test administrations.
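As a rough illustration of the scoring procedure just described, the sketch below counts raw points and then maps them to a scaled score through a lookup table. The conversion table shown is invented for illustration; real tables differ between administrations.

```python
# Hedged sketch of raw-to-scaled conversion: one raw point per correct answer,
# nothing deducted for wrong or blank answers, then a lookup in an
# administration-specific table. This table is hypothetical.
EXAMPLE_MATH_CONVERSION = {0: 200, 10: 330, 20: 440, 30: 540, 40: 620, 50: 710, 58: 800}

def raw_score(answers: list, key: list) -> int:
    """Count correct answers; wrong or blank answers neither add nor subtract points."""
    return sum(1 for given, correct in zip(answers, key) if given == correct)

def scaled_score(raw: int) -> int:
    """Map a raw score to a 200-800 scaled score using the nearest lower entry
    in the (hypothetical) conversion table."""
    return EXAMPLE_MATH_CONVERSION[max(k for k in EXAMPLE_MATH_CONVERSION if k <= raw)]

print(scaled_score(raw_score(["A", "C", "B"], ["A", "C", "D"])))  # 2 raw points -> 200
```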

Logistics

Frequency

The SAT is offered seven times a year in the United States: in August, October, November, December, March, May, and June. For international test takers, the SAT is offered four times a year: in October, December, March, and May (with an exception in 2020, when an additional September administration was introduced to cover the worldwide May cancellation and the August administration was also made available to international test takers). The test is usually offered on the first Saturday of the month for the October, November, December, May, and June administrations. The test was taken by 2,198,460 high school graduates in the class of 2020.

Candidates wishing to take the test can register online at the College Board website or by mail at least three weeks before the test date.

Fees

As of 2019, the SAT costs US$49.50 (£39.50, €43.50), or US$64.50 with the optional essay, plus additional fees of over US$45 if the test is taken outside the United States. The College Board makes fee waivers available for low-income students. Additional fees apply for late registration, standby testing, registration changes, scores by telephone, and extra score reports (beyond the four provided free).

Accommodation for candidates with disabilities

Students with demonstrable disabilities, including physical and learning disabilities, can take the SAT with accommodations. The standard time extension for students requiring additional time due to learning or physical disabilities is time plus 50%; time plus 100% is also offered.

Scaled scores and percentiles

Students receive their online score reports approximately two to three weeks after test administration (longer for mailed, paper scores). The report includes the total score (the sum of the two section scores, each section being scored on a scale of 200-800) and, for the optional essay, three subscores (each on a scale of 2-8, for reading, analysis, and writing). For an additional fee, students can purchase various score verification services, including (for select test administrations) the Question and Answer Service, which provides the test questions, the student's answers, the correct answers, and the type and difficulty of each question.

In addition, students receive two percentile scores, each of which is defined by the College Board as the percentage of students in a comparison group with equal or lower test scores. One of the percentiles, called the Nationally Representative Sample Percentile, uses all US students in the 11th and 12th grades as the comparison group, whether or not they took the SAT. This percentile is theoretical and is derived using methods of statistical inference. The second percentile, called the SAT User Percentile, uses the actual scores of a comparison group of recent American students who took the SAT. For example, for the 2019-2020 school year, the SAT User Percentile was based on the test scores of students in the graduating classes of 2018 and 2019 who took the SAT (specifically the 2016 revision) during high school. Students receive both types of percentiles for their total score as well as their section scores.
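A short sketch of that percentile definition (the share of a comparison group scoring at or below a given score) is given below; the comparison scores are invented for illustration and are not College Board data.

```python
# Percentile as defined above: percentage of students in a comparison group
# whose scores are equal to or lower than the given score.
def sat_percentile(score: int, comparison_scores: list) -> float:
    at_or_below = sum(1 for s in comparison_scores if s <= score)
    return 100.0 * at_or_below / len(comparison_scores)

# Hypothetical comparison group of total scores
group = [1050, 980, 1310, 1200, 870, 1450, 1130, 1000, 1520, 940]
print(sat_percentile(1200, group))  # 70.0
```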

Percentiles for total scores (2019)

Score (400-1600 scale), SAT User percentile, Nationally Representative Sample percentile
1600 99+ 99+
1550 99+ 99+
1500 98 99
1450 96 99
1400 94 97
1350 91 94
1300 86 91
1250 81 86
1200 74 81
1150 67 74
1100 58 67
1050 49 58
1000 40 48
950 31 38
900 23 29
850 16 21
800 10 14
750 5 8
700 2 4
650 1 1
640–400 <1 <1

Percentiles for total scores (2006)

The following table summarizes the original percentiles used for the version of the SAT administered from March 2005 through January 2016. These percentiles used students in the class of 2006 as the comparison group.

Percentile (official, 2006), Score on 400-1600 scale, Score on 600-2400 scale
99.93 / 99.98* 1600 2400
99.5 ≥1540 ≥2280
99 ≥1480 ≥2200
98 ≥1450 ≥2140
97 ≥1420 ≥2100
93 ≥1340 ≥1990
88 ≥1280 ≥1900
81 ≥1220 ≥1800
72 ≥1150 ≥1700
61 ≥1090 ≥1600
48 ≥1010 ≥1500
36 ≥950 ≥1400
24 ≥870 ≥1300
15 ≥810 ≥1200
8 ≥730 ≥1090
4 ≥650 ≥990
2 ≥590 ≥890
* The percentile of the perfect score was 99.98
on the 2400 scale and 99.93 on the 1600 scale.

Percentiles for total scores (1984)

Score (1984) Percentile
1600 99.9995
1550 99.983
1500 99.89
1450 99.64
1400 99.10
1350 98.14
1300 96.55
1250 94.28
1200 91.05
1150 86.93
1100 81.62
1050 75.31
1000 67.81
950 59.64
900 50.88
850 41.98
800 33.34
750 25.35
700 18.26
650 12.37
600 7.58
550 3.97
500 1.53
450 0.29
400 0.002

The version of the SAT administered before April 1995 had a very high ceiling. In any given year, only seven of the million test takers scored above 1580; a score above 1580 corresponded to the 99.9995 percentile.

In 2015, the average score for the class of 2015 was 1490 out of a maximum of 2400, down 7 points from the previous class's average and the lowest composite score of the past decade.

SAT-ACT Score Comparisons

The College Board and ACT, Inc. conducted a joint study of students who took both the SAT and the ACT between September 2004 (for the ACT) or March 2005 (for the SAT) and June 2006. In May 2016, the College Board published concordance tables to concord scores on the SAT used from March 2005 through January 2016 to the SAT used since March 2016, as well as tables to concord scores on the SAT used since March 2016 to the ACT.

In 2018, the College Board, in partnership with ACT, introduced a new concordance table to better compare how a student who took one test would fare on the other. It is now considered the official concordance for college admission professionals and replaces the one from 2016. The new concordance no longer includes the old SAT (out of 2400); it includes only the new SAT (out of 1600) and the ACT (out of 36).

Research

Preparation

SAT preparation, begun in 1946 by Stanley Kaplan as a 64-hour course, has become a highly lucrative field. Many companies and organizations offer test preparation in the form of books, classes, online courses, and tutoring. The test preparation industry began almost simultaneously with the introduction of university entrance exams in the United States and has flourished from the start.

Still, the College Board maintains that the SAT is essentially uncoachable, and research by the College Board and the National Association of College Admission Counseling suggests that tutoring courses result in an average increase of about 20 points on the math section and 10 points on the verbal section. Like IQ scores, with which they are strongly correlated, SAT scores tend to be stable over time, meaning SAT preparation courses offer only limited benefit. An early (1983) meta-analysis reached similar results and noted that "the size of the coaching effect estimated from the matched or randomized studies (10 points) seems too small to be practically important." Statisticians Ben Domingue and Derek C. Briggs examined data from the Education Longitudinal Survey of 2002 and found that the effects of coaching were statistically significant only for math. In addition, coaching had a larger effect on some students than on others, especially those who had taken rigorous courses and those of high socioeconomic status. A 2012 systematic literature review estimated a coaching effect of 23 and 32 points for the math and verbal tests, respectively. A 2016 meta-analysis estimated the effect size for the verbal and math sections at 0.09 and 0.16, respectively, although there was a high degree of heterogeneity. Public misunderstanding of how to prepare for the SAT continues to be exploited by the preparation industry.

The College Board announced a partnership with the nonprofit Khan Academy to offer free test preparation materials starting in the 2015-16 academic year, in order to help level the playing field for students from low-income families. Students can also bypass costly preparation programs by using the more affordable official College Board guide and maintaining solid study habits.

There is evidence that taking the PSAT at least once can help students get better at the SAT.

Predictive validity and powers

In 2009, education researchers Richard C. Atkinson and Saul Geiser of the University of California (UC) system argued that high school GPA is better than the SAT at predicting college grades, regardless of high school type or quality. Some UC officials hoped to increase the number of African American and Hispanic students enrolled, planning to do so by casting doubt on the SAT, and to reduce the number of Asian American students, who are heavily represented in the UC student body (29.5%) relative to their share of the California population (13.6%). However, their claims about the predictive validity of the SAT were contested by the UC Academic Senate. In its 2020 report, the UC Academic Senate found that the SAT outperformed high school GPA in predicting first-year GPA and was as good as high school GPA at predicting first-year retention and graduation. It found that this predictive validity held across demographic groups. A number of College Board reports show similar predictive validity across demographic groups.

The SAT correlates with intelligence and, as such, estimates individual differences. It does not, however, say anything about "effective cognitive performance," that is, what intelligent people actually do. Nor does it measure non-cognitive traits associated with academic success such as positive attitudes or conscientiousness. Psychometricians Thomas R. Coyle and David R. Pillow showed in 2008 that the SAT predicts college GPA even after removing the general factor of intelligence (g), with which it is highly correlated. A 2010 meta-analysis by researchers at the University of Minnesota suggested that standardized admission tests such as the SAT predict not only freshman GPA but also overall college GPA. A 2012 study from the same university, using a multi-institutional dataset, found that even after controlling for socioeconomic status and high school GPA, SAT scores still predicted the college GPA of university and college students. A 2019 study, with a sample of about a quarter of a million students, suggests that SAT scores and high school GPA combined are a strong predictor of freshman GPA and retention. In 2018, psychologists Oren R. Shewach, Kyle D. McNeal, Nathan R. Kuncel, and Paul R. Sackett showed that both high school GPA and SAT scores predict enrollment in advanced collegiate courses, even after controlling for Advanced Placement credits.

Education economist Jesse M. Rothstein stated in 2005 that high schools' average SAT scores were more predictive of freshman GPA than students' individual SAT scores. In other words, a student's individual SAT score was not as predictive of future academic success as the average score of their high school. In contrast, individual high school GPAs were a better predictor of college success than schools' average high school GPAs. Additionally, an admissions officer who ignored average SAT scores would risk overestimating the future performance of a student from a low-scoring school and underestimating that of a student from a high-scoring school.

Like other standardized tests such as the ACT or the GRE, the SAT is a traditional method of assessing the academic aptitude of students who have had vastly different educational experiences, and as such it focuses on common material that students could reasonably be expected to have encountered over the course of their studies. For that reason, for example, the math section contains no material above the precalculus level. Psychologist Raymond Cattell referred to this as testing "historical" rather than "current" crystallized intelligence. Psychologist Scott Barry Kaufman further noted that the SAT can only measure a snapshot of a person's performance at a particular moment in time. Educational psychologists Jonathan Wai, David Lubinski, and Camilla Benbow observed that one way to increase the predictive validity of the SAT would be to assess students' spatial reasoning ability, as the SAT currently contains no such questions; spatial reasoning is important for success in STEM fields. Experimental psychologist Meredith Frey noted that while advances in educational research and neuroscience may improve the ability to predict scholastic performance in the future, the SAT remains a valuable tool in the meantime.

Difficulty

The SAT rigorously assesses students' mental stamina, memory, speed, accuracy, and ability to think abstractly and analytically.

In a 2012 article, educational psychologist Jonathan Wai argued that the SAT was too easy to be useful to the most competitive colleges and universities, whose applicants typically had excellent high school GPAs and standardized test scores. Admissions officers therefore bore the burden of distinguishing top scorers from one another, without knowing whether a student's perfect or near-perfect score truly reflected their academic ability. He suggested that the College Board make the SAT more difficult, which would raise the test's measurement ceiling and allow the top schools to identify the best and brightest among their applicants. At that point, the College Board was already working on making the SAT harder. The changes were announced in 2014 and implemented in 2016.

After the June 2018 test was found to be easier than usual, the College Board made adjustments that resulted in lower-than-expected scores, prompting complaints from students, though some understood that this was done to ensure fairness. In its analysis of the incident, the Princeton Review supported the idea of curving scores but noted that the test was unable to distinguish among students at the 86th percentile (650 points) or higher in math. The Princeton Review also observed that this particular curve was unusual in that it offered high-scoring students no cushion against careless or last-minute mistakes. It published a similar blog post about the August 2019 SAT, when a similar incident occurred and the College Board responded in the same way: "A student who misses two questions on an easier test shouldn't do as well as a student who misses two questions on a harder test. Equating takes care of this issue." It also warned students not to retake the SAT right away, as they might be disappointed again, and advised them to take a break before trying again.

Association with general cognitive skills

In a 2000 study, psychometrician Ann M. Gallagher and her colleagues found that only the top students used intuitive reasoning to solve the problems they encountered on the math section of the SAT.

Frey and Detterman (2004) examined the association of SAT scores with intelligence test scores. Using an estimate of general mental ability, or g, based on the Armed Services Vocational Aptitude Battery, they found that SAT scores were highly correlated with g (r = 0.82 in their sample, 0.857 when adjusted for non-linearity) in a sample drawn from a 1979 national probability survey. In addition, they examined the correlation between SAT results, using the revised and recentered form of the test, and scores on the Raven's Advanced Progressive Matrices, a test of fluid intelligence (reasoning), this time using a non-random sample. They found that the correlation of SAT scores with scores on the Raven's Advanced Progressive Matrices was 0.483, and they estimated that this correlation would have been about 0.72 had the ability range of the sample not been restricted. They also noted that there appeared to be a ceiling effect on the Raven's scores, which may have suppressed the correlation. Beaujean and colleagues (2006) reached conclusions similar to those of Frey and Detterman. Because the SAT is strongly correlated with general intelligence, it can be used as a proxy measure of intelligence, especially when time-consuming traditional assessment methods are not available.
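For readers unfamiliar with how a correlation is adjusted upward for a restricted ability range, the sketch below applies a standard restriction-of-range correction (Thorndike's Case II). It illustrates the general technique only; the ratio u used in the example is invented and is not a figure reported by Frey and Detterman.

```python
# Standard restriction-of-range correction (Thorndike Case II), shown only as a
# general illustration of how a restricted-sample correlation can be adjusted.
# r: observed correlation in the restricted sample
# u: ratio of the unrestricted to the restricted standard deviation of the predictor
import math

def correct_for_range_restriction(r: float, u: float) -> float:
    return u * r / math.sqrt(1 + (u ** 2 - 1) * r ** 2)

# With the reported r = 0.483, an illustrative (not reported) ratio of u = 1.9
# yields a corrected correlation of about 0.72.
print(round(correct_for_range_restriction(0.483, 1.9), 2))  # 0.72
```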

Psychometrician Linda Gottfredson found that the SAT is effective in identifying intellectually gifted students who are college bound.

For decades, many critics have accused the designers of the verbal SAT of cultural bias as an explanation for the score gap between poorer and richer test takers, with the strongest criticism coming from the University of California system. A famous example of this perceived bias on the SAT I was the oarsman-regatta analogy question, which is no longer part of the exam. The object of the question was to find the pair of terms whose relationship was most similar to the relationship between "runner" and "marathon"; the correct answer was "oarsman" and "regatta". Choosing the correct answer was thought to require familiarity with rowing, a sport popular with the wealthy. For psychometricians, however, analogy questions are a useful tool for assessing students' mental abilities, since even when the meaning of two words is unclear, a student with sufficiently strong analytical thinking should be able to identify their relationship. Analogy questions were removed in 2005. They were replaced by questions that provide more contextual information in case students do not know the relevant definition of a word, making it easier to guess the correct answer.

Association with college or university majors and rankings

In 2010, University of Oregon physicists Stephen Hsu and James Schombert examined five years of student records at their university and found that the academic standing of students majoring in mathematics or physics (but not biology, English, sociology, or history) depended strongly on SAT math scores. Students with an SAT math score below 600 were very unlikely to excel as mathematics or physics majors. They found no such patterns, however, between SAT verbal, or combined SAT verbal and math, and the other majors mentioned above.

In 2015, Duke University educational psychologist Jonathan Wai analyzed average test scores from the 1946 Army General Classification Test (10,000 students), the 1952 Selective Service College Qualification Test (38,420 students), Project Talent in the early 1960s (400,000 students), the Graduate Record Examination between 2002 and 2005 (over 1.2 million students), and the SAT Math and Verbal in 2014 (1.6 million students). Wai identified one consistent pattern: those with the highest test scores tended to choose science and engineering as their majors, while those with the lowest scores were more likely to choose education and agriculture.

A 2020 paper by Laura H. Gunn and her colleagues examining data from 1,389 institutions in the United States revealed strong positive correlations between the average SAT percentiles of incoming students and the proportion of graduates with a STEM and social science focus. On the other hand, they found negative correlations between the former and the proportions of graduates in psychology, theology, law enforcement, recreation, and fitness.

Various researchers have found that average SAT or ACT scores and college rankings in the U.S. News & World Report correlate strongly, at almost 0.9. Between the 1980s and the 2010s, the US population grew while universities and colleges did not expand their capacity to match; as a result, admission rates dropped significantly, meaning it has become harder for students to get into the schools their parents attended. In addition, high-achieving students today are much more likely to leave their hometowns to pursue higher education at prestigious institutions. As a result, standardized tests such as the SAT are a more reliable measure of selectivity than admission rates. When Michael J. Petrilli and Pedro Enamorado analyzed the SAT composite scores (math and verbal) of the incoming freshman classes of 1985 and 2016 at the top universities and liberal arts colleges in the United States, they found that the median scores of new students at a number of institutions had risen by at least 150 points, including at the University of Notre Dame (from 1290 to 1440, a 150-point increase) and Elon College (from 952 to 1192, a 240-point increase).

Association with types of schools

While there is some evidence that private schools tend to produce students who perform better on standardized tests such as the ACT or the SAT, Kevin Duncan and Jonathan Sandy showed, using data from the National Longitudinal Surveys of Youth, that when student characteristics such as age, race, and gender (7%), family background (45%), school quality (26%), and other factors were taken into account, the private school advantage fell by 78%. The researchers concluded that students who attended private schools already possessed the attributes associated with high scores.

Association with educational and social background and outcomes

A 2001 analysis of the University of California system, which examined data on its students from fall 1996 through fall 1999, found that the SAT II was the best predictor of college success as measured by freshman GPA, followed by high school GPA, and finally the SAT I. After controlling for family income and parental education, the already weak ability of the SAT I to measure aptitude and college readiness fell sharply, while the more substantial ability of high school GPA and the SAT II to measure those qualities remained undiminished (and even increased slightly). The University of California system required both the SAT I and the SAT II of applicants during the four academic years covered by the analysis. This analysis has been widely published but is disputed by many studies.

There is evidence that the SAT correlates with societal and educational outcomes, including completion of a four-year college program. A 2012 study by psychologists at the University of Minnesota analyzing multi-institutional datasets suggested that the SAT maintains its ability to predict collegiate performance even after controlling for socioeconomic status (as measured by the combination of parental educational attainment and income) and high school GPA. The researchers concluded that SAT scores were not merely a proxy for socioeconomic status. This finding has been replicated and holds across racial and ethnic groups and for both sexes. In addition, the Minnesota researchers found that the socioeconomic status distributions of the student bodies of the schools examined reflected those of their respective applicant pools. Because of what it measures, however, a person's SAT scores cannot be separated from their socioeconomic background.

In 2007, Rebecca Zwick and Jennifer Greif Green observed that typical analyses fail to account for the heterogeneity of the high schools attended, in terms not only of the socioeconomic status of the student body but also of grading standards. Zwick and Greif Green went on to show that when these factors were taken into account, the correlation between family socioeconomic status and classroom grades and class rank increased, while that between family socioeconomic status and SAT scores decreased. They concluded that school grades and SAT scores are similarly related to family income.

According to the College Board, in 2019, 56% of test takers had parents with a university degree, 27% had parents whose highest attainment was a high school diploma, and about 9% had parents who had not completed high school. (8% did not answer the question.)

Association with family structures

One proposed partial explanation for the gap in educational achievement between Asian American and European American students, as measured for instance by the SAT, is the general tendency of Asian students to come from stable two-parent households. In their 2018 analysis of data from the Bureau of Labor Statistics' National Longitudinal Surveys, economists Adam Blandin, Christopher Herrington, and Aaron Steelman concluded that family structure plays an important role in determining educational outcomes in general and SAT scores in particular. Families headed by a single parent without a degree were labeled 1L, those with two parents but no degree 2L, and those with two parents and at least one degree between them 2H. Children from 2H families had a significant advantage over those from 1L families, and this gap grew between 1990 and 2010: the mean SAT composite scores (verbal and math) of children from 2H families rose by 20 points, while those of children from 1L families fell by one point, so the gap between them widened by 21 points, or one fifth of a standard deviation.

Speaking to the Wall Street Journal, family sociologist W. Bradford Wilcox said, "In the absence of SAT scores to identify children from difficult family backgrounds with great academic potential, family stability is likely to play an even larger role when it comes to determining who makes it to the college finish line in California [whose public university system decided in 2020 to no longer require SAT and ACT scores for admission]."

Gender differences

In performance

In 2013, the American College Testing Board released a report stating that boys outperformed girls on the mathematics section of the test. As of 2015, boys on average earned 32 points more than girls on the SAT math section. Among those scoring in the 700-800 range, the male-to-female ratio was 1.6:1. In 2014, psychologist Stephen Ceci and co-workers found that boys outperformed girls across the percentiles; for example, a girl in the top 10% of her gender would only be in the top 20% among boys. In 2010, psychologist Jonathan Wai and colleagues, analyzing three decades of data on 1.6 million intellectually gifted seventh graders from Duke University's Talent Identification Program (TIP), showed that in the early 1980s the gender gap on the SAT math section among students scoring in the top 0.01% was 13.5:1 in favor of boys, but that it had dropped to 3.8:1 by the 1990s. The dramatic gender ratio of the early 1980s was replicated in another study using a sample from Johns Hopkins University. The later ratio is similar to that observed for ACT math and science scores between the early 1990s and the late 2000s, and it remained largely unchanged through the late 2000s. Gender differences in SAT math scores became apparent at the level of 400 points and above.

Some researchers point to evidence of greater male variability in spatial ability and mathematics. Greater male variability in body weight, height, and cognitive ability has been found across cultures, resulting in larger numbers of males at both the lowest and the highest ends of the test-score distribution. Consequently, more men are found at both extremes of the performance distribution on the math sections of standardized tests such as the SAT, producing the observed gender discrepancy. Paradoxically, this stands at odds with girls' tendency to earn higher classroom grades than boys.

On the other hand, Wai and colleagues found that the two genders performed roughly equally on the verbal section of the SAT among the top 5%, although girls gained a slight but noticeable advantage over boys starting in the mid-1980s. Psychologist David Lubinski, who conducted longitudinal studies of seventh graders who scored exceptionally high on the SAT, found a similar result. Girls generally had better verbal reasoning ability and boys better mathematical ability. This mirrors other studies of cognitive ability in the general population, rather than just the 95th percentile and above.

Although aspects of testing such as stereotype threat are a concern, research on the predictive validity of the SAT has shown that it tends to be a more accurate predictor of female college GPA than of male college GPA.

In strategy

SAT math questions can be answered intuitively or algorithmically.

Math problems on the SAT can be roughly divided into two groups: conventional and unconventional. Conventional problems can be handled routinely with familiar formulas or algorithms, while unconventional problems demand more creative thought, either to put known solution methods to unusual use or to draw on the specific knowledge needed to solve them. In 2000, ETS psychometrician Ann M. Gallagher and her colleagues analyzed students' self-reports of how they handled SAT math questions. They found that, for both sexes, the most popular approach was to use formulas or algorithms learned in class. When this failed, however, men were more likely than women to identify suitable alternative solution methods. Earlier research had indicated that men are more likely to explore unusual approaches, whereas women tend to stick with what they learned in class, and that women are more likely to identify the appropriate approach when only mastery of classroom material is required.

In confidence

On older versions of the SAT, students were asked how confident they were in their mathematical and verbal reasoning ability, specifically whether or not they believed they were in the top 10%. Devin G. Pope analyzed data on over four million test takers from the late 1990s to the early 2000s and found that high scorers were more likely to report being in the top 10%, with the top scorers reporting the highest confidence. But there were notable gaps between the sexes. Men were much more confident in their mathematical ability than women; for example, among those who scored 700 on the math section, 67% of men answered that they believed they were in the top 10%, while only 56% of women did the same. Women, on the other hand, were somewhat more confident in their verbal reasoning ability than men.

In glucose metabolism

Cognitive neuroscientists Richard Haier and Camilla Persson Benbow used positron emission tomography (PET) scans to study the rate of glucose metabolism in students who took the SAT. They found that in men, those with higher SAT math scores had higher rates of glucose metabolism in the temporal lobes than those with lower scores, contradicting the brain efficiency hypothesis. However, this trend was not seen in women for whom the researchers could not find cortical regions associated with mathematical reasoning. Both sexes achieved the same results on average in their sample and had the same overall rates of cortical glucose metabolism. According to Haier and Benbow, this is evidence of the structural differences in the brain between the sexes.

Association with race and ethnicity