
CLAT technical UX issues cause flood of complaints: Should consortium act or were rules clear? • Full audit trail by 3 Oct

The two arrows point to two parts of the interface that may have tripped up candidates (screenshot from the CLAT mock exam)

Potential user experience (UX) issues in the design of the computerised Common Law Admission Test (CLAT) 2020 exam may have affected the scores of hundreds of candidates or more.

A flood of vocal complaints from candidates on social media today (1 October) - after provisional scores were released the previous day (30 September) - alleged that their scores did not correspond to the questions they had actually answered, with some claiming a discrepancy of 30 or more marks.

Many also claimed that they had received zero marks for questions that they had answered correctly.

We have also spoken to around a dozen candidates, and while it is impossible to authoritatively confirm at this point whether this applies to all candidates’ complaints of technical issues, it appears that many of the complaints were caused by the exam’s potentially counter-intuitive and non-standard implementation of the “mark for review” feature.

In the provisional score software that had been made accessible to candidates, this might have shown that a candidate had a “chosen option” but that the status was “Marked For Review”, meaning they received 0 marks for the question (see screenshot below).

Marked for review = 0 points in CLAT 2020

We had first reported the potential issue with 'mark for review' on the day of the exam (28 September).

We had written back then:

Apparently, the exam’s instructions today explicitly noted that any questions “marked for review” - a feature intended to bookmark questions so as to return to them later - would not be counted (even if the candidate had given an answer before clicking the ‘mark for review & next question’ button).

Furthermore, we understand that when returning to answers that had been ‘marked for review’, candidates were also required to first click a button marked ‘clear response’ before clicking on the new answer and submitting it.

If ‘clear response’ was not clicked before submitting the new answer, the system may not have registered the new answer in the way that had arguably been intended by the candidate.

We have tried out the mock test just now, which did appear to accept (and remember) new answers to questions ‘marked for review’, even without pressing the ‘clear response’ button.

We have not been able to confirm whether that was communicated or clarified on screen differently in the final exam.

Update 01:06: As pointed out in a comment below and by several candidates to us directly, it is possible that once you have clicked on an answer and then immediately clicked on another answer (without submitting the answer and without clicking ‘mark for review’), the software may have counter-intuitively frozen the first answer as the one that counts (unless you first clicked ‘clear response’). The instructions are not 100% clear on this (see below), and we have not been able to independently confirm that this is the way the software worked, but have reached out to the CLAT for comment.
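To make the hypothesised behaviour concrete, the sketch below models in Python how the answer states may have worked, pieced together from the instructions, candidates’ accounts and our mock test. It is purely illustrative: the class, the method names and the “frozen first click” rule are our assumptions, not the CLAT software’s actual code.

```python
# Hypothetical model of the CLAT 2020 answer-state behaviour described
# above. All names and rules here are assumptions for illustration only.

class Question:
    def __init__(self):
        self.selected = None           # option currently highlighted on screen
        self.saved = None              # option the system would actually score
        self.marked_for_review = False

    def click_option(self, option):
        # 01:06 update hypothesis: a later click on a different option may
        # not replace the first one unless 'Clear Response' is pressed first.
        if self.selected is None:
            self.selected = option

    def clear_response(self):
        self.selected = None
        self.saved = None

    def save_and_next(self):
        self.saved = self.selected
        self.marked_for_review = False

    def mark_for_review_and_next(self):
        # Per the instructions: a question still 'Marked For Review' at the
        # end is not counted, even if an answer had been selected.
        self.saved = self.selected
        self.marked_for_review = True

    def score(self, correct_option):
        if self.marked_for_review or self.saved is None:
            return 0
        return 1 if self.saved == correct_option else 0


# The scenario many candidates describe: pick the right answer, mark it
# for review intending to come back, run out of time - and score zero.
q = Question()
q.click_option("B")
q.mark_for_review_and_next()
print(q.score("B"))  # 0
```

On this model, a single ‘Save & Next’ click when revisiting the question would have been enough to rescue the mark, which is part of why the extra ‘Clear Response’ step strikes many as counter-intuitive.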

Online vs pen and paper

But there is a case to be made, from a user and usability perspective, that the system should also have accepted a click on the big blue “Save & Next” button instead of first requiring a click on a “Clear Response” button.

Likewise, it might be reasonable to assume that a candidate takes a good stab at an answer, saves it and clicks ‘Mark for Review’, hoping to get back to the question later to double check it, and then runs out of time.

Much as for a traditional pen and paper exam, the candidate might reasonably have thought that at least their original answer should count.

In fact, if you used the ‘mark for review’ button as intended, it might be a severe detriment: considering the time pressure, you are unlikely to have the time to go back to more than a few questions, since unlike a pen and paper exam that you can quickly skim, returning to each question entails clicking on small icons, which could take a not negligible amount of extra time.

A question of instructions?

Excerpt from the three pages of mock exam instructions (yellow highlighting ours)

While we do not have a copy of the instructions provided to candidates in the exam, we have been able to confirm the instructions provided to candidates in the mock exam (see a full PDF of the instructions below, coming to around three pages of text).

Update 11:04: We have been told from an authoritative source close to the CLAT that the instructions in the final exam were identical to the below instructions given in the mock (except for having removed reference to the ‘calculator’ option).

As the excerpt from the mock exam above illustrates, the mark for review issue was definitely raised in the instructions, which (fairly) clearly spelled out that the “mark for review” button would result in zero points if the question was not returned to later.

However, the second part regarding the ‘clear response’ button was expressed less clearly in the ninth and tenth out of the 14 points of instructions provided to candidates (see screenshot below).

How to use the Clear Response button

On a careful reading of points 3, 9 and 10 together, they do seem to strongly imply that you need to click the ‘clear response’ button to change your answer.

However, this could arguably have been stated much more clearly, such as by:

  • including reference to the ‘clear response’ button in the third point, which deals with ‘mark for review’, or
  • mentioning ‘mark for review’ again under point 10, preferably in bold.

In any case, it’s fair to say that the instructions are fairly confusing.

On the other hand, you could argue that the language and instructions are no more obtuse than that used in terms and conditions, contracts or judgments that the budding lawyers may read one day.

And the mock exam (much like Facebook or other websites with long T&Cs that no one ever reads) also included a very unequivocal checkbox, stating: “I have read and understood the instructions.” (see screenshot below).

An early reading comp section?

While it is possible to chalk this up to a reading comprehension fail by candidates, it is also potentially a fairly understandable one for those used to other competitive online exams, most of which do not follow this kind of format requiring multiple button clicks.

On top of that, the stressful physical exam was held in the midst of a global pandemic. This could have brought with it additional issues that could have negatively affected candidates’ ability to properly parse the instructions before the clock officially started ticking on the exam.

One candidate told us: “In the actual exam, I marked all options for review out of habit. And as I was given a 13:45 reporting time, I didn’t get much time to actually read the instructions; neither did the people at the centre tell us about this.”

“This was unique to CLAT; no other exam does this kind of weird stuff,” they added.

CLAT consortium: Experts to respond by 3 October

We have reached out to CLAT consortium member and Nalsar Hyderabad vice-chancellor (VC) Prof Faizan Mustafa with the above issues.

(Mustafa had taken over the day-to-day management of the CLAT after the 3 September ‘Claxit’ of NLSIU Bangalore, whose VC and CLAT secretary Prof Sudhir Krishnaswamy had been handling the majority of the CLAT’s technical preparations and mock exams until then.)

Mustafa explained, on behalf of the consortium: “We have received many objections. The objections are being examined by the expert committee.”

He said that this expert committee would deliver its report by 3 October to the consortium’s executive committee and possibly also the governing body, and that thereafter the final answer key and (potentially revised) score sheets would be uploaded.

“The online exam has an inbuilt system of generating an audit trail of each and every candidate, which is the most authentic proof of what they really did,” Mustafa noted. “We are examining the audit trail of some of the candidates who have raised objections about the response sheets.”

“But what is the expert opinion of these sheets, we will know once the expert committee [makes its report on 3 October],” added Mustafa.

Specifically regarding the potential ‘mark for review’ issue, Mustafa noted: “It was clearly in the instruction and even in the mock marks were not counted when (clicked marked for review).”

Potential errors in answer key, TBC

The expert committee would also evaluate and recommend by 3 October which questions, if any, would be withdrawn.

As in nearly every CLAT year, we understand that candidates have raised issues that at least 10 questions in the answer key may have had wrong model answers or potentially multiple correct answers.

We have not yet been able to check all those in detail and it is likely that only a smaller number of those complaints will eventually be accepted.

“The expert committee that will recommend on withdrawal of questions will be ready by 3rd [October], when these will be placed before the executive committee [of the CLAT], and most likely the governing body, and thereafter the final answer key will be uploaded,” said Mustafa.

Auditing the 'mark for review' audit trail

It does not seem likely that a court would strike down the exam and order a re-exam on the basis of the above issues, even if aggrieved candidates were to mount a legal challenge (as some have implied online; we had even been sent one piece of very badly-photoshopped fake news recently, which suggested the CLAT had announced it would hold re-takes).

But even if the legal risk may be limited, the CLAT could theoretically decide to help, though not without raising issues of its own.

If there is indeed an audit trail, which we understand may include a record of candidates’ clicks of the mouse on the exam screen, it should theoretically be possible for the system to automatically identify candidates who may have clicked ‘mark for review’ on every question they answered.

It should even potentially be possible to instruct the system with more complicated rules, which could identify all candidates who may have used mark for review without ever properly returning to the question.

And it might even be able to tell who had used the ‘mark for review’ button and regularly selected a new answer without clicking ‘clear response’ first, thereby scoring zero across the board.
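If such a click-level log exists, the queries involved would be straightforward. The sketch below shows, under heavy assumptions, how flagging candidates who never returned to confirm their answers might work; the event names and log format are entirely invented, since the structure of the real audit trail has not been made public.

```python
# Illustrative only: assumes a flat log of (candidate, question, action)
# click events with invented action labels; the real audit trail format
# is unknown.
from collections import defaultdict

def flag_never_confirmed(events):
    """Return candidates whose every answered question ended in
    'mark_for_review', i.e. who apparently never returned to confirm."""
    last_action = defaultdict(dict)  # candidate -> question -> final action
    for candidate, question, action in events:
        last_action[candidate][question] = action

    return [
        candidate
        for candidate, questions in last_action.items()
        if questions
        and all(a == "mark_for_review" for a in questions.values())
    ]

# Example: C123 marked both questions for review and never came back,
# while C456 saved their answer normally.
log = [
    ("C123", 1, "select"), ("C123", 1, "mark_for_review"),
    ("C123", 2, "select"), ("C123", 2, "mark_for_review"),
    ("C456", 1, "select"), ("C456", 1, "save_and_next"),
]
print(flag_never_confirmed(log))  # ['C123']
```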

But while it might be technically possible for the CLAT consortium’s expert committee to retrospectively help all the students who may have technically made a mistake in using the exam software properly, this would also raise myriad difficult and abstract questions.

Notably:

  • Would it be fair to penalise candidates who did not read or follow the instructions properly?
  • Would it be fair to ‘reward’ or make exceptions for candidates who technically did not follow instructions properly, by awarding them points on the basis of how they thought the exam software worked?
  • What about candidates who may not have had time in the exam hall to read the instructions due to pandemic or other logistical issues? And is it possible to even reconstruct if a candidate had faced such issues?
  • Does having such instructions for using the exam software potentially disadvantage those from less privileged backgrounds, some of whom may have never used a computer and full web-browser before?
  • Alternatively, does it perhaps disadvantage those who have taken multiple other competitive exams that used similar systems and assumed the CLAT software would have worked similarly?
  • Finally, would the decision to make an exception potentially disadvantage those candidates who did follow the instructions properly?

Much like the 2020 CLAT, which was widely reported to be of a higher difficulty than the mock exams, there are no easy answers here.

CLAT 2020 mock exam instructions
