06-13-2017, 02:57 PM
For some reason I'm unable to send you a private message:confused: but I have a statistics question for you....
SolarKat
06-13-2017, 03:22 PM
cookderosa Wrote:For some reason I'm unable to send you a private message:confused: but I have a statistics question for you.... Aw shucks, the curious (yet has a big ego) part of me wants to know what the "statistics" question is... The other part of my ego (let's call him MrFancyEgoPants) is saying "hey, i might know this"... Did you get any error messages when you tried to PM SolarKat? Did it mention the mailbox was full or something?
Study.com Offer https://bit.ly/3ObjnoU
In Progress: UMPI BAS & MAOL | TESU BA Biology & Computer Science Graduate Certificate: ASU Global Management & Entrepreneurship Completed: TESU ASNSM Biology, BSBA (ACBSP Accredited 2017) Universidad Isabel I: ENEB MBA, Big Data & BI, Digital Marketing & E-Commerce Certs: 6Sigma/Lean/Scrum, ITIL | Cisco/CompTIA/MTA | Coursera/Edx/Udacity The Basic Approach | Plans | DegreeForum Community Supported Wiki ~Note~ Read/Review forum posts & Wiki Links to Sample Degree Plans Degree Planning Advice | New To DegreeForum? How This Area Works
06-13-2017, 04:52 PM
bjcheung77 Wrote:Aw shucks, the curious (yet has a big ego) part of me wants to know what the "statistics" question is...You probably *do* know it. Right now I'm working on tying my shoe, with mixed results. Dang, it's hot. (I know, not like down south. However, I melt at about 70F. If I wanted all the fun and heat of the south, I'd move there!) My brain is a puddle of goo at present. So I suspect you're going to have the better answer, whatever the question! Lemme go see what's up with my mail.
06-13-2017, 05:31 PM
Bunny ears, Bunny ears, playing by a tree.
Criss-crossed the tree, trying to catch me. Bunny ears, Bunny ears, jumped into the hole, Popped out the other side beautiful and bold.
NanoDegree: Intro to Self-Driving Cars (2019)
Coursera: Stanford Machine Learning (2019) TESU: BA in Comp Sci (2016) TECEP:Env Ethics (2015); TESU PLA:Software Eng, Computer Arch, C++, Advanced C++, Data Struct (2015); TESU Courses:Capstone, Database Mngmnt Sys, Op Sys, Artificial Intel, Discrete Math, Intro to Portfolio Dev, Intro PLA (2014-16); DSST:Anthro, Pers Fin, Astronomy (2014); CLEP:Intro to Soc (2014); Saylor.org:Intro to Computers (2014); CC: 69 units (1980-88) PLA Tips Thread - TESU: What is in a Portfolio?
06-13-2017, 06:27 PM
davewill Wrote:Bunny ears, Bunny ears, playing by a tree. Dear God, now I have to try to tie infernal knots while reciting poetry? Where are my flip flops....
06-13-2017, 07:04 PM
SolarKat Wrote:Dear God, now I have to try to tie infernal knots while reciting poetry? Where are my flip flops.... LOL, flip flops aren't bad if you're casually lounging around. But if you're pretty active, wear something more supportive.
06-14-2017, 10:24 AM
OMG you guys are hilarious.
Before the question - google the TED talk on how to tie your shoes; you've probably been doing it wrong. (I'm serious, it's a real TED talk.)

Here is my stats question for ANYONE who wants to take a stab at it. I have a pea-brain-sized understanding of the difference between norm-referenced standardized testing (kinda bell-curve) and criterion-referenced standardized testing (based on actual correct answers). What I can't for the LIFE of me figure out is how the heck CLEP turns the raw score into the scaled score. For years I've been satisfied with a basic level of understanding, but I'm trying to figure out if their explanation is stat-geek-speak for norm-referenced.

EDIT to add: I used to have a different document that was CLEARLY saying they used norm-referencing, but that's a decade old, and this is what's on their website now:

Reliability

The reliability of the test scores of a group of examinees is commonly described by two statistics: the reliability coefficient and the standard error of measurement (SEM). The reliability coefficient is the correlation between the scores those examinees get (or would get) on two independent replications of the measurement process. The reliability coefficient is intended to indicate the stability/consistency of the candidates' test scores, and is often expressed as a number ranging from .00 to 1.00. A value of .00 indicates total lack of stability, while a value of 1.00 indicates perfect stability. The reliability coefficient can be interpreted as the correlation between the scores examinees would earn on two forms of the test that had no questions in common.

Statisticians use an internal-consistency measure to calculate the reliability coefficients for the CLEP exam. This involves looking at the statistical relationships among responses to individual multiple-choice questions to estimate the reliability of the total test score. The formula used is known as Kuder-Richardson 20, or KR-20, which is equivalent to a more general formula called coefficient alpha.

The SEM is an estimate of the amount by which a typical test taker's score differs from the average of the scores that a test taker would have gotten on all possible editions of the test. This hypothetical average over all editions of the test is referred to as the "true score". It is expressed in score units of the test. Intervals extending one standard error above and below the true score for a test taker will include 68 percent of that test taker's obtained scores. Similarly, intervals extending two standard errors above and below the true score will include 95 percent of the test taker's obtained scores. The SEM is inversely related to the reliability coefficient. If the reliability coefficient of the test were 1.00 (if it perfectly measured the candidate's knowledge), the SEM would be zero.

An additional index of reliability is the conditional standard error of measurement (CSEM). Tests can be more reliable at some score levels than at other levels. That is, the reliability estimate is conditional on the score level; there are then different estimates for different score levels, and these are referred to as Conditional Standard Errors of Measurement, or CSEMs. For CLEP tests, the CSEM is reported for the score level that corresponds to the recommended C-level credit-granting score. Since different editions of the exam contain different questions, a test taker's score would not be exactly the same on all possible editions of the exam. The CSEM indicates how much those scores would vary. It is the typical distance of those scores (all for the same test taker) from their average. A test taker's CSEM on a test cannot be computed directly, but by using the data from many test takers, it can be estimated. The CSEM estimate reported here is for a test taker whose average score, over all possible forms of the exam, would be equal to the recommended C-level credit-granting score.

Validity

Validity is a characteristic of a particular use of the test scores from a group of test takers. If the scores are used to make inferences about the test takers' knowledge of a particular subject, the validity of the scores for that purpose is the extent to which those inferences can be trusted to be accurate.

One type of evidence for the validity of test scores is called content-related evidence of validity. It is usually based upon the judgments of a set of experts who evaluate the extent to which the content of the test is appropriate for the inferences to be made about the examinees' knowledge. The CLEP test development committees select the content of the tests to reflect the content of the corresponding courses at most colleges, based on a curriculum survey. Because colleges differ somewhat in the content of the courses they offer, faculty members are urged to review the content outline and the sample questions to ensure that the test covers core content that corresponds with the courses at their colleges.

Another type of evidence for test score validity is called criterion-related evidence of validity. It consists of statistical evidence that test takers who score high on the test also do well on other measures of the knowledge or skills the test is being used to measure. In the past, criterion-related evidence for the validity of CLEP scores has been provided by studies comparing students' CLEP scores to the grades they received in corresponding classes. Although CLEP no longer conducts these studies, individual colleges using the tests can undertake such studies in their own courses. Learn more about CLEP and ACES, a free College Board service that allows institutions to conduct these studies. https://clep.collegeboard.org/about-clep/scoring
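This doesn't answer the raw-to-scaled mystery (College Board doesn't publish the conversion tables), but the KR-20 and SEM formulas the excerpt names are simple enough to show in a few lines of Python. The item-response matrix below is completely made up for illustration; KR-20 is (k/(k-1)) * (1 - Σp·q / σ²), and SEM is the score standard deviation times √(1 - reliability):

```python
import math

def kr20(responses):
    """KR-20 reliability. responses: one row per examinee, each a list of 0/1 item scores."""
    k = len(responses[0])                            # number of items
    n = len(responses)                               # number of examinees
    totals = [sum(row) for row in responses]         # each examinee's raw score
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n   # variance of total scores
    # Sum p*q over items, where p = proportion answering item i correctly
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)

def sem(responses, reliability):
    """Standard error of measurement: SD of total scores * sqrt(1 - reliability)."""
    totals = [sum(row) for row in responses]
    n = len(totals)
    mean = sum(totals) / n
    sd = math.sqrt(sum((t - mean) ** 2 for t in totals) / n)
    return sd * math.sqrt(1 - reliability)

# Fabricated data: 5 examinees x 5 multiple-choice items (1 = correct)
data = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
]
r = kr20(data)
print("KR-20:", round(r, 3))
print("SEM:", round(sem(data, r), 3))
```

Note how the SEM behaves exactly as the excerpt says: push the reliability toward 1.00 and the SEM shrinks toward zero. None of this tells you whether the scaling is norm-referenced, though - KR-20 is about consistency, not about how the cut score or scale was set.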