The "Ask @JoeSeperac" Thread Forum

Discussions related to the bar exam are found in this forum.
Forum rules
Anonymous Posting

Anonymous posting is only appropriate when you are sharing sensitive information about bar exam prep. You may anonymously respond on topic to these threads. Unacceptable uses include: harassing another user, joking around, testing the feature, or other things that are more appropriate in the lounge.

Failure to follow these rules will get you outed, warned, or banned.
jrstephens1991

Bronze
Posts: 169
Joined: Mon Apr 20, 2009 9:58 pm

The "Ask @JoeSeperac" Thread

Post by jrstephens1991 » Wed Apr 25, 2018 2:10 pm

Can we just go ahead and make this a thing? :lol:


The guy is a genius and can answer pretty much any question that has to do with a mathematical breakdown of your MBE and MEE scores.

@JoeSeperac do you mind?

Neilt001

Moderator
Posts: 293
Joined: Sun Feb 04, 2018 10:35 pm

Re: The "Ask @JoeSeperac" Thread

Post by Neilt001 » Wed Apr 25, 2018 2:41 pm

jrstephens1991 wrote:Can we just go ahead and make this a thing? :lol:


The guy is a genius and can answer pretty much any question that has to do with a mathematical breakdown of your MBE and MEE scores.

@JoeSeperac do you mind?
He is an absolute genius and a gentleman! Very generous with his time too :)

ndbigdave

Bronze
Posts: 295
Joined: Tue Nov 24, 2015 12:25 am

Re: The "Ask @JoeSeperac" Thread

Post by ndbigdave » Wed Apr 25, 2018 3:05 pm

jrstephens1991 wrote:Can we just go ahead and make this a thing? :lol:


The guy is a genius and can answer pretty much any question that has to do with a mathematical breakdown of your MBE and MEE scores.

@JoeSeperac do you mind?

He is the man. I pride myself on breaking down statistics, but Joe takes it to the pro level (meanwhile I am toiling somewhere in the minors).

His knowledge and advice, supported by years of research and statistics, are as good as it gets.

JoeSeperac

Moderator
Posts: 507
Joined: Thu Feb 16, 2017 3:30 pm

Re: The "Ask @JoeSeperac" Thread

Post by JoeSeperac » Wed Apr 25, 2018 3:18 pm

Thanks for the kind words. Feel free to ask away.

bretby

Bronze
Posts: 452
Joined: Thu Oct 30, 2014 5:15 pm

Re: The "Ask @JoeSeperac" Thread

Post by bretby » Wed Apr 25, 2018 3:39 pm

Thanks, Joe! I'm just curious where the following scores put me for the NY bar: total 325; MBE: 167.2. I thought I had done awesome on the essays and was uncertain about the multiple choice, but it looks like maybe I had it backwards?


JoeSeperac

Moderator
Posts: 507
Joined: Thu Feb 16, 2017 3:30 pm

Re: The "Ask @JoeSeperac" Thread

Post by JoeSeperac » Wed Apr 25, 2018 4:33 pm

bretby wrote:Thanks, Joe! I'm just curious where the following scores put me for the NY bar: total 325; MBE: 167.2. I thought I had done awesome on the essays and was uncertain about the multiple choice, but it looks like maybe I had it backwards?
Nice job. You are just two points below the current TLS top score of 327. Based on your scaled MBE score of 167.2, your estimated raw MBE score was about 150/175 correct. This means you answered about 86% of the graded MBE questions correctly. Based on the F16 national MBE statistics (which serve as a good predictor of this exam's percentiles), this places you in the 98.6th percentile for the MBE, meaning that 1.4% of Feb examinees nationwide did better than you. Based on a total score of 325, your written score was 157.8. Assuming that the MEE/MPT percentiles follow the national MBE statistics, a 157.8 scaled MEE/MPT score would have placed you in the 93.8th percentile among examinees nationwide (meaning that only 6.2% of examinees nationwide would have scored better than you on the MEE/MPT).
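To make the arithmetic concrete, here is a minimal Python sketch of this breakdown: the written score is simply the total minus the scaled MBE, and the raw-score and percentile estimates are linear interpolations over lookup tables. The anchor points below are only the handful of estimates quoted in this thread (F13 raw-to-scaled conversion, F16 national percentiles), not the full NCBE tables, so treat the output as illustrative.

from bisect import bisect_left

# (scaled MBE, estimated raw correct out of 175) - F13 conversion,
# reconstructed from the estimates given in this thread
RAW_POINTS = [(139.7, 114), (140.5, 115), (142.0, 117),
              (149.0, 126), (167.2, 150), (170.2, 154)]

# (scaled score, national percentile) - F16 data, per this thread
PCT_POINTS = [(133.3, 46.6), (139.7, 64.0), (140.5, 65.8), (142.0, 69.0),
              (147.0, 79.3), (149.0, 83.4), (157.0, 93.1), (157.8, 93.8),
              (159.5, 95.2), (167.2, 98.6), (170.2, 99.3), (172.8, 99.6)]

def interp(points, x):
    # Linear interpolation (or extrapolation at the ends) between anchors.
    xs = [p[0] for p in points]
    i = min(max(bisect_left(xs, x), 1), len(points) - 1)
    (x0, y0), (x1, y1) = points[i - 1], points[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def breakdown(total, mbe_scaled):
    written = total - mbe_scaled
    raw = interp(RAW_POINTS, mbe_scaled)
    print(f"Estimated raw MBE: {raw:.0f}/175 ({raw / 175:.0%} correct)")
    print(f"MBE percentile:    {interp(PCT_POINTS, mbe_scaled):.1f}")
    print(f"Written (MEE/MPT): {written:.1f}, ~{interp(PCT_POINTS, written):.1f} percentile")

breakdown(325, 167.2)  # bretby's scores: ~150/175 raw, ~98.6 / ~93.8 percentiles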

Generally, there is a pretty good correlation between written scores and MBE scores. For example, take a look at these statistics:
https://www.mble.org/past-exam-performance-statistics

In this Missouri data, notice how closely the average MBE tracks the average written score. For example, looking for a small sample, I saw that in J07 there were 7 examinees who took the exam 4+ times. Their average MBE was 116.86 versus an average written score of 116.43. I see this all the time - some examinees score insanely high on the MBE (170+) and then also get a written score of 170+ even though they felt their written effort was just OK. This is why I tell examinees to focus on the MBE and take calculated risks with the MEE/MPT. If examinees fail, it is almost always because their MBE was below 140.

b290

Bronze
Posts: 348
Joined: Mon Oct 23, 2017 5:28 pm

Re: The "Ask @JoeSeperac" Thread

Post by b290 » Wed Apr 25, 2018 5:48 pm

OP read my mind. Glad to see people from elsewhere seeing what we in NY already knew (even pre-UBE). Keep it up, Joe :D

My $.02

nada123

New
Posts: 63
Joined: Thu Aug 10, 2017 3:13 am

Re: The "Ask @JoeSeperac" Thread

Post by nada123 » Wed Apr 25, 2018 7:06 pm

Hi Joe, can you work your magic with my scores? :D

Overall 273, MBE 139.7 (New York).

Thanks!!

Nightcrawler

Bronze
Posts: 222
Joined: Wed Apr 04, 2018 12:02 pm

Re: The "Ask @JoeSeperac" Thread

Post by Nightcrawler » Wed Apr 25, 2018 7:23 pm

Question for Joe.

First of all, thanks to you I finally understood why a worse pool of applicants results in a lower scale (even if it's counter-intuitive to anyone who was used to the good ol' grading curve in law school).

With that said, what can you tell us about the grading calibration sessions? In particular, I read on the Cal Bar website that they meet, analyze some candidates' written answers, decide on a grading grid, and repeat this process a few times to make sure that their grading grid is uniform.

So here is my specific question: doesn't this method result in a "less demanding" grid during an examination with a worse pool of applicants (such as February), therefore letting better applicants score more points under that grid (compared to when the same "better applicant" sits in July, where the grid is built from stronger average answers)?

Thanks in advance!


JoeSeperac

Moderator
Posts: 507
Joined: Thu Feb 16, 2017 3:30 pm

Re: The "Ask @JoeSeperac" Thread

Post by JoeSeperac » Wed Apr 25, 2018 7:44 pm

nada123 wrote:Hi Joe, can you work your magic with my scores? :D Overall 273, MBE 139.7 (New York). Thanks!!
Based on your scaled MBE score of 139.7, your estimated raw MBE score was about 114/175 correct. This means you answered about 65% of the graded MBE questions correctly. This places you in the 64th percentile for the MBE, meaning that 36% of Feb examinees nationwide did better than you. Based on a total score of 273, your written score was 133.3, which would have placed you in the 46.6th percentile among examinees nationwide (meaning that 53.4% of examinees nationwide would have scored better than you on the MEE/MPT).

JoeSeperac

Moderator
Posts: 507
Joined: Thu Feb 16, 2017 3:30 pm

Re: The "Ask @JoeSeperac" Thread

Post by JoeSeperac » Wed Apr 25, 2018 8:02 pm

Nightcrawler wrote:Question for Joe.

First of all, thanks to you I finally understood why a worse pool of applicants results in a lower scale (even if it's counter-intuitive to anyone who was used to the good ol' grading curve in law school).

With that said, what can you tell us about the grading calibration sessions? In particular, I read on the Cal Bar website that they meet, analyze some candidates' written answers, decide on a grading grid, and repeat this process a few times to make sure that their grading grid is uniform.

So here is my specific question: doesn't this method result in a "less demanding" grid during an examination with a worse pool of applicants (such as February), therefore letting better applicants score more points under that grid (compared to when the same "better applicant" sits in July, where the grid is built from stronger average answers)?

Thanks in advance!
Be careful: if you get me started on this, I may never stop. I have looked at thousands of graded essays/MPTs and I sometimes can't understand how they are scored. I study them in all sorts of ways, even breaking them down into keyword analysis. For example, please take a look at the following:
https://seperac.com/pdf/J14-Essay%20Ana ... ay%201.pdf

It documents obvious and serious mistakes in J14 essay grading. You will need to zoom in on this PDF to read the material (I try to put a number of essays on one page so they can be visually compared). This PDF is a small sample of 15 answers to Essay 1 from the July 2014 exam. As part of this Essay Analysis, I try to determine the weight of each issue and then I calculate each examinee's score for each issue (for example, PROF-RES: Solicitation/Referral Fees (Seperac Est. score of 2/10)). The final result is the "Seperac Estimated Score." Bar graders have neither the time nor the interest to put similarly scored essays side by side to see if the grading is indeed accurate. However, when I do this, grading inaccuracies often come to light. For example, if you look at the 5th essay (Jul2014-Essay-001-ID 002-Typed-Score 38.66), this "Examinee J" received a score of 38.66. If you compare this essay to the other essays that scored around 38.66, you will see that it is far superior. I feel this essay's score was severely discounted – just compare it to the released Model Answers and you will see what I mean. How this is not a passing essay is a complete mystery to me.
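To show the mechanics, here is a minimal Python sketch of that per-issue tally. The issue names, weights, and earned points below are hypothetical placeholders (only the Solicitation/Referral Fees figure echoes the example above), not the actual J14 worksheet.

# Hypothetical weights (points available per issue) for one essay.
ISSUE_WEIGHTS = {
    "PROF-RES: Solicitation/Referral Fees": 10,
    "Hypothetical Issue B": 20,
    "Hypothetical Issue C": 20,
}

def seperac_estimated_score(earned):
    # Sum each examinee's per-issue points, capped at the issue's weight.
    return sum(min(earned.get(issue, 0), weight)
               for issue, weight in ISSUE_WEIGHTS.items())

# One examinee's per-issue points, e.g. the 2/10 estimate mentioned above:
print(seperac_estimated_score({
    "PROF-RES: Solicitation/Referral Fees": 2,
    "Hypothetical Issue B": 18,
    "Hypothetical Issue C": 15,
}))  # -> 35

Tallying every answer this way is what makes it possible to lay similarly scored essays side by side and spot the inconsistencies.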

The biggest reason for unreliability in essay grading is multiple graders. In small states, this is not an issue – one grader will handle all the essays for a question. This makes the appraisal more consistent, but still subject to the whims of a subjective grader. NCBE tells the graders to put the essays in "buckets" (a 1 bucket, a 2 bucket, etc.) and to make sure there is an even distribution. Thus, if you are competing against strong essay writers, you will have a hard time. However, I still believe issue-spotting is paramount. If the graders are using the NCBE Scoring Analysis (which they should be), then if you spot the issue, you have to receive credit for it. If all the examinees spot all the issues, you will still have a problem because their writing is likely stronger than yours, but if they do not, you can do OK on the essays.

For example, the following was a passing J16 essay: http://seperac.com/pdf/Jul2016-Essay%201-49.46.pdf

Although poorly written, it spotted all the issues. In looking at other examinees' essays, many failed to spot the dissociation issue. I feel that when the grader saw that this examinee spotted every issue, including the one almost no one else spotted, the grader felt the essay had to be regarded as a passing essay.

According to a 1977 study entitled An Analysis Of Grading Practices On The California Bar Examination by Stephen P. Klein, Ph.D., the "grading standards for the California bar exam essays are based on: analysis of the problem, knowledge of the law, application of legal principles, reasoning, and the appropriateness of the conclusions reached. The objective "correctness" of the answer are not supposed to affect the grade assigned." Bar exam graders have to be trained to apply these scoring rules consistently through a process called "calibration." According to NCBE's digest called The Bar Examiner:

The MBE has a reliability of about 0.9. The reliability of the MBE varies a little from administration to administration (from about 0.89 to 0.91) but is consistently high enough to meet the reliability requirement by itself. The reliability of the written component is generally lower and more variable than the reliability of the MBE. Assuming that the written component includes 6 to 10 tasks (including essay questions and performance tasks), that the candidate responses to each essay question and/or performance task are graded by a single grader (or a set of calibrated graders who have been trained to apply the scoring rules consistently), and that the overall written component score is the sum or average of the scores on the individual tasks, the reliability will tend to be about 0.7. So, the written components of most bar examinations are not reliable enough in themselves to meet the rule of thumb of 0.8 or 0.9, but when combined appropriately with the MBE, the overall score tends to have a reliability higher than 0.9.
The Bar Examiner: Volume 78, Number 4, November 2009
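For intuition on how the 0.9 and 0.7 figures combine, here is a minimal Python sketch using the standard classical-test-theory formula for the reliability of a two-part weighted composite (assuming independent measurement errors). The weights, score SDs, and MBE-written correlation are illustrative assumptions, not NCBE's actual parameters:

def composite_reliability(w1, w2, sd1, sd2, rel1, rel2, r12):
    # Reliability of w1*X1 + w2*X2: one minus the ratio of the summed
    # error variances to the total composite variance.
    var_total = (w1 * sd1) ** 2 + (w2 * sd2) ** 2 + 2 * w1 * w2 * sd1 * sd2 * r12
    var_error = (w1 * sd1) ** 2 * (1 - rel1) + (w2 * sd2) ** 2 * (1 - rel2)
    return 1 - var_error / var_total

# MBE reliability ~0.9, written ~0.7, equal SDs, components correlating ~0.75:
print(round(composite_reliability(0.5, 0.5, 1, 1, 0.9, 0.7, 0.75), 3))  # ~0.886
print(round(composite_reliability(0.6, 0.4, 1, 1, 0.9, 0.7, 0.75), 3))  # ~0.905

Under these assumptions a 50/50 composite lands just under 0.9, and shifting weight toward the more reliable MBE pushes it over - which is presumably part of what "combined appropriately" means.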

It can be inferred from this article that the reliability of the essays, while already lower than the MBE's, will diminish further if the graders are not sufficiently trained to apply the scoring rules consistently. This was confirmed in the 1977 Klein study:

"there was far more consistency among the readers before the regular reading process began (calibration data set) than there was once this process was underway. This difference is evident on all three indices of agreement and clearly illustrates that the initial calibration data does not reflect accurately the degree of agreement among, the readers in the scores that are subsequently used in determining an applicant's pass/fail status. For instance, with the calibration sample there was a range of 70-85 percent agreement on the pass/fail decision~ whereas this range dropped to 27-57 percent at the beginning and to 23-53 percent at the end of the regular reading period. In other words, during the normal reading process, the readers agreed with one another about one-half as well as they did during the calibration process!"

Basically, at the beginning of the grading process (immediately after calibration), the graders were most likely to be consistent (with the highest consistency being 85%). At the end of the grading process, the graders were least likely to be consistent (with the highest consistency being 53% and the lowest being 23%). At the time of this study in 1977, the California bar exam graders convened just once to calibrate the essays. Currently, they convene three times to "calibrate." (see The State Bar Of California Committee Of Bar Examiners/Office Of Admissions Description And Grading Of The California Bar Examination – General Bar Examination And Attorneys' Examination http://admissions.calbar.ca.gov/LinkCli ... iqbATHUwY=)

However, despite currently convening three times to calibrate, California's essay grading is still unreliable. Sometimes examinees who failed the California bar exam send me their score sheets to review. On the California bar exam, if an examinee's total is above 1390 but below the 1440 passing score, the examinee's essays and PTs are re-read to ensure accuracy, and both read scores are reported and averaged. I have seen instances where there is a 20-point difference in an essay's score between re-reads.
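As a quick sketch of what such a swing does under the re-read rule just described - the thresholds come from above, while the two essay reads are invented for illustration:

# California re-read rule as described above: totals above 1390 but below
# the 1440 passing score trigger a second read, and the reads are averaged.
def gets_reread(total):
    return 1390 < total < 1440

def averaged_essay_score(first_read, second_read):
    return (first_read + second_read) / 2

# A hypothetical essay with a 20-point swing between the two graders:
print(averaged_essay_score(55, 75))  # -> 65.0, matching neither grader's read

A 20-point disagreement moves the averaged essay score by 10 points, which can be the whole margin between passing and failing.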

Meanwhile, the NY graders convene only once to "calibrate." In a March 2011 discussion at New York Law School, Bryan R. Williams of the NYS Board of Law Examiners stated:

"The grading of the exam is done by seven people throughout state - all practicing lawyers. There are five board members of the NY Board of Law Examiners who are appointed by the Court of Appeals. Each one of those board members have seven people who are in their team. Each person is responsible for one essay, and that team of people, then they grade the essay and the MPT. So what happens is we have the question written, and then we have a model answer. And just like this exam that was just given, a few days after the exam, all of us, the seven graders and myself, will receive about 50 sample answers given by candidates, so we all get the same 50, and we individually go and we grade those exams based upon the model answer that they did, and then we have a meeting and we come together and we make sure that we are all grading the same way, so we can get calibrated, and there has never been a time since I've been doing this, and I've been doing this since 1986, there has never been a time where we would have had that meeting and because of the kinds of answers we get back, we don't in some way change our model answer because what we are trying to do, we are trying to rank order people."
See 2011 NYLS Bar Kickoff video @ 13:15-14:40

Think about this for a minute. California does more than New York to ensure its essay grading is reliable (e.g., commissioning studies on the reliability of its essay grading, having graders convene three times for calibration, scoring essays in 5-point increments), yet California essay scores can still experience 20-point swings between different graders. Imagine what point swings can occur with New York essays! This is why I tell examinees to focus on the MBE and take calculated risks on the MEE/MPT. On the essays, you can do everything right (study the right material, answer the questions properly, etc.), but you are statistically less likely to get the score you deserve than you are on the MBE.

PaddyO424

New
Posts: 4
Joined: Tue Mar 24, 2015 9:54 am

Re: The "Ask @JoeSeperac" Thread

Post by PaddyO424 » Wed Apr 25, 2018 8:16 pm

Thanks for doing this, Joe. If you have a minute, can you work your math voodoo on my score? I got a 299 with a scaled MBE score of 142.0. I completed over 2000 MBE questions and about 80% of the Themis program.

Thank you sir!

JoeSeperac

Moderator
Posts: 507
Joined: Thu Feb 16, 2017 3:30 pm

Re: The "Ask @JoeSeperac" Thread

Post by JoeSeperac » Wed Apr 25, 2018 9:44 pm

PaddyO424 wrote:Thanks for doing this, Joe. If you have a minute, can you work your math voodoo on my score? I got a 299 with a scaled MBE score of 142.0. I completed over 2000 MBE questions and about 80% of the Themis program.

Thank you sir!
Based on your scaled MBE score of 142, your estimated raw MBE score was about 117/175 correct. This means you answered about 67% of the graded MBE questions correctly. This places you in the 69th percentile for the MBE, meaning that 31% of Feb examinees nationwide did better than you. Based on a total score of 299, your written score was 157. Assuming that the MEE/MPT percentiles follow the national MBE statistics, a 157 scaled MEE/MPT score would have placed you in the 93.1st percentile among examinees nationwide (meaning that 6.9% of examinees nationwide would have scored better than you on the MEE/MPT).

In doing 2k Qs, your MBE practice knowledge probably carried over to the MEE. Usually, at least a few MEE topics seem to derive from MBE practice Qs.


kc128

Bronze
Posts: 132
Joined: Tue Oct 17, 2017 4:13 pm

Re: The "Ask @JoeSeperac" Thread

Post by kc128 » Wed Apr 25, 2018 10:26 pm

Jumping on this too... 296 overall UBE, 149 MBE... Thank you!

JoeSeperac

Moderator
Posts: 507
Joined: Thu Feb 16, 2017 3:30 pm

Re: The "Ask @JoeSeperac" Thread

Post by JoeSeperac » Wed Apr 25, 2018 11:59 pm

kc128 wrote:Jumping on this too... 296 overall UBE, 149 MBE... Thank you!
Based on your scaled MBE score of 149, your estimated raw MBE score was about 126/175 correct (based on the F13 scale). This means you answered about 72% of the graded MBE questions correctly. This places you in the 83.4th percentile for the MBE (based on F16 data), meaning that 16.6% of Feb examinees nationwide did better than you. Based on a total score of 296, your written score was 147, which would have placed you in the 79.3rd percentile among examinees nationwide (meaning that 20.7% of examinees nationwide would have scored better than you on the MEE/MPT).

onionhead

New
Posts: 15
Joined: Tue Apr 24, 2018 11:45 pm

Re: The "Ask @JoeSeperac" Thread

Post by onionhead » Thu Apr 26, 2018 3:02 am

hey joe,

If you don't mind, I'd be grateful for your insight. I got a score of 300 and MBE 140.5.

thanks!

JoeSeperac

Moderator
Posts: 507
Joined: Thu Feb 16, 2017 3:30 pm

Re: The "Ask @JoeSeperac" Thread

Post by JoeSeperac » Thu Apr 26, 2018 8:57 am

onionhead wrote:hey joe,

If you don't mind, I'd be grateful for your insight. I got a score of 300 and MBE 140.5.

thanks!
Based on your scaled MBE score of 140.5, your estimated raw MBE score was about 115/175 correct (based on the F13 scale). This means you answered about 66% of the graded MBE questions correctly. This places you in the 65.8th percentile for the MBE (based on F16 data), meaning that 34.2% of Feb examinees nationwide did better than you. Based on a total score of 300, this means you bowled a perfect game. It also means your written score was 159.5, which would have placed you in the 95.2nd percentile among examinees nationwide (meaning that 4.8% of examinees nationwide would have scored better than you on the MEE/MPT).


rNadOm

New
Posts: 3
Joined: Thu Apr 26, 2018 3:45 pm

Re: The "Ask @JoeSeperac" Thread

Post by rNadOm » Thu Apr 26, 2018 3:50 pm

Hi Joe - could you please help me calculate my stats as well? 343 total and 170.2 scaled MBE (New York).

Thanks in advance !

Nightcrawler

Bronze
Posts: 222
Joined: Wed Apr 04, 2018 12:02 pm

Re: The "Ask @JoeSeperac" Thread

Post by Nightcrawler » Thu Apr 26, 2018 3:56 pm

JoeSeperac wrote:Be careful: if you get me started on this, I may never stop. […]
Oh wow, thank you for all the details! So one possible outcome of this is that, unlike the MBE, February could be a little more forgiving on the written part than July IF the grading weren't so subjective (but it is, so focusing on the MBE is more efficient for us). Very useful insights.

byag

New
Posts: 13
Joined: Tue Dec 26, 2017 5:49 pm

Re: The "Ask @JoeSeperac" Thread

Post by byag » Thu Apr 26, 2018 4:33 pm

Hi Joe @JoeSeperac,

I took the February 2018 NY UBE exam and received a 162.6 written and a 134.4 MBE. Can you please tell me where my scores rank? Your website was extremely useful to me; due in part to your analysis (and other resources) I was able to increase my total UBE score from 247 (July '17) to 297. I am now preparing for the FL exam and need to raise my MBE score to FL's required 136 (missed it by 1.6!!) so I look forward to using your website again in the coming months. Thanks!

pech71

New
Posts: 98
Joined: Mon Oct 28, 2013 11:11 am

Re: The "Ask @JoeSeperac" Thread

Post by pech71 » Thu Apr 26, 2018 4:49 pm

byag wrote:Hi Joe @JoeSeperac,

I took the February 2018 NY UBE exam and received a 162.6 written and a 134.4 MBE. Can you please tell me where my scores rank? Your website was extremely useful to me; due in part to your analysis (and other resources) I was able to increase my total UBE score from 247 (July '17) to 297. I am now preparing for the FL exam and need to raise my MBE score to FL's required 136 (missed it by 1.6!!) so I look forward to using your website again in the coming months. Thanks!

Is it just me, or do a lot of people this time around have really high essay scores compared to their MBE scores? I have seen many people post for @joe's analysis, and their essays are 15-20, sometimes 30, points higher than their MBE.

Did you score high on the essays or the MPT? It makes me think there was something in a question or two that some people just got correct (like the civ pro sanctions rule, property, and the crim law fitness-for-trial rule) and that shot them ahead of the curve.


JoeSeperac

Moderator
Posts: 507
Joined: Thu Feb 16, 2017 3:30 pm

Re: The "Ask @JoeSeperac" Thread

Post by JoeSeperac » Thu Apr 26, 2018 5:02 pm

rNadOm wrote:Hi Joe - could you please help me calculate my stats as well? 343 total and 170.2 scaled MBE (New York).

Thanks in advance !
Based on your scaled MBE score of 170.2, your estimated raw MBE score was about 154/175 correct (based on the F13 scale). This means you answered about 88% of the graded MBE questions correctly. This places you in the 99.3rd percentile for the MBE (based on F16 data), meaning that only 0.7% of Feb examinees nationwide did better than you. Based on a total score of 343, your written score was 172.8, which would have placed you in the 99.6th percentile among examinees nationwide (meaning that 0.4% of examinees nationwide would have scored better than you on the MEE/MPT).

You basically crushed the exam. At best, you could have added about 10 points to each of your MBE and MEE/MPT components to end up with about a 363, which would put you in the 100th percentile nationwide. So if you had studied just a little bit more, you might have achieved total domination.

rNadOm

New
Posts: 3
Joined: Thu Apr 26, 2018 3:45 pm

Re: The "Ask @JoeSeperac" Thread

Post by rNadOm » Thu Apr 26, 2018 6:34 pm

JoeSeperac wrote:You basically crushed the exam. […] So if you had studied just a little bit more, you might have achieved total domination.
Awesome - thank you! Total domination shall have to wait for another standardized test, it seems.

hope2018

New
Posts: 29
Joined: Thu Mar 01, 2018 5:57 pm

Re: The "Ask @JoeSeperac" Thread

Post by hope2018 » Thu Apr 26, 2018 7:10 pm

JoeSeperac, you are awesome!!! :D
I wonder if you will be able to run the same formula for CA examinees whenever they release the results (May 18th)?

Respondeat_Inferior

New
Posts: 51
Joined: Sun Dec 03, 2017 4:03 am

Re: The "Ask @JoeSeperac" Thread

Post by Respondeat_Inferior » Thu Apr 26, 2018 7:51 pm

Hi Joe,

1. Do you have any hobbies or anything you like to do in your off-time in particular?

2. In your opinion, what is the best restaurant you've eaten at?

3. Plain-toe, cap-toe, or wingtip?

