In this document you can find a description of most of CodeJudge's features (as we continuously develop the system, it may have undocumented features that are not yet "released"). The document is intended for people who set up exercises (professors, teachers, TAs, etc.), not for students.

You are always welcome to contact us if you have any questions.


Besides this reference document, we have the following guides available. These will most likely be the best place to start for newcomers:

Admin Interface

The green menu items are the admin features, while the white items are for the users. The users do not have access to the admin features, but you will have access to everything on the course, including content hidden from the users on the user pages (e.g. test data, the 'regrade' button, etc.). Content only admins can see on the user pages is marked with a small key icon. You can see exactly what is visible to the users by clicking the "see as user" link in the dropdown shown when you click on your login name.

Exercises On the exercise pages, besides viewing and solving exercises, course admins can also edit exercise groups and exercises. The pages are separated into different tabs, where the admin-only tabs are marked with the key icon. See Exercises for more details on the available tabs.

Assignments Similar to the exercise pages but for assignments, see Assignments for more details.

Submissions On the Submissions page a complete list of all submissions on the course is shown. The submissions can be filtered between dates, or a specific submission can be found by its id. The latter is useful when a user has a question about a specific submission, since it can simply be retrieved using its id (which is also visible to the user).

Statistics On the Statistics page general statistics of the course activity can be seen.

Reports On the Reports page a CSV report can be generated containing various data for selected exercises.

Users On the Users page all users on the course, including administrators, are listed. The users can have a tag that can be used to distinguish different users. Specific information about a user, including solved exercises, attempts and all submissions, can be seen by clicking on the user in the list.

Settings On the Settings page, different configurations for the course can be set. See Settings for further details.


Exercises

On the exercise pages you can create and edit normal exercises grouped in exercise groups. The normal exercises are meant to be non-mandatory training exercises, which means the users can submit solutions at any time, and the best submission determines the exercise status (solved, attempted or un-attempted). Assignment exercises, on the other hand, are meant to be handed in; see assignments for further details.

The exercise pages are split into different tabs separating the admin-functionality from the user functionality. The 'View' tabs show what the user will see, while the 'Edit' tabs are where the exercises/exercise groups can be edited. On the individual exercises, there are also tabs where submissions and some statistics for the exercise can be seen.

Exercise Group

An exercise group is a set of normal exercises. When creating/editing an exercise group the following properties can be set:

Name
The name shown in the list of exercise groups (like 'Week X' etc.).
Visible From
The date and time from which the exercise group will be visible to the users.


Exercise

An exercise has the following properties:

Name
The name shown in the list of exercises in the exercise group.
Description
A description of the exercise visible to the users. The description can be styled using markdown.
Languages
Select the programming languages in which the exercise may be solved. Selecting "Default Languages" will allow the default language(s) of the course, see settings.
Attached Files
Files such as templates or header files can be attached to an exercise, and will then be available to the users. Make sure not to upload test data here.


Solution

A solution to the exercise can optionally be uploaded, and can serve two purposes: it can be used to generate the output of the test data automatically (see expected output), and/or it can be shown to the users as a suggested solution. The following properties are thus available for the solution:

Solution Files
Upload the solution file(s) for the exercise (just as a user would).

Show Suggested Solution

Select if/when a suggested solution should be shown to the users:

  • After exercise is solved - the suggested solution will only be visible to a user after they have solved the exercise.
  • Always - the suggested solution will always be visible to the users.
  • Never - the suggested solution will never be visible to the users.

Suggested Solution Type

If the suggested solution is visible at some point (not set to "Never"), then the type of the suggested solution can be selected:

  • Use solution files - the uploaded solution files will be shown as the suggested solution.
  • Custom - a custom suggested solution can be written, see next option.
Suggested Solution Text
The suggested solution can be written in the text area using markdown.

Test Groups

Finally, the test data should be uploaded. The tests are uploaded in test groups, which are sets of tests with the same configuration (see configuration). An exercise can have as many test groups as you want. For each test group the following properties can be set:

Name
The name of the test group, visible to the user (e.g. Tests, Sample tests, etc.).

Run If

Select when the test group should be run:

  • Always - the test group will always be run when a new submission to the exercise is made.
  • [TestGroup] succeeds - the test group will only be run if the test group [TestGroup] succeeds, i.e. all tests are passed.
  • Never - the test group will not be run automatically. You can manually run it on submissions afterwards.


Visibility

Select what should be visible to the users after submitting a solution:

  • All - All test data and judge output will be visible.
  • No test files - Only the judge output will be visible.
  • No test files or judge - Only the result (Succeeded, Wrong Answer, etc.) of each test is visible.
  • Not visible - The entire test group will not be visible.
(Overwrite) Test Data
All the relevant test data to the test group should be uploaded either as individual files or as a zip-file. The new test data will overwrite any existing test data. For details on the test data files, see test data.
Advanced Configurations
A number of advanced configurations can also be specified. See Configuration for more details.

For training exercises we suggest having only one test group with full feedback (visibility set to All) and execution mode set to until failure (see configuration). With these settings the users can focus on correcting one test at a time. We also encourage you to upload a suggested solution and make it visible after the exercise is solved, because we believe that the best way to learn is to try yourself first and afterwards see a correct solution for comparison.


Exercises and exercise groups can be imported into CodeJudge by uploading a single zip-file. To import a single exercise, the uploaded zip-file should contain a folder with the exercise name and the following files in it:

NOTE: This function is very strict about the contents of the uploaded file, and only limited error messages to help diagnose problems are currently available.

A JSON-file containing the exercise properties.
(optional) The description of the exercise.
(optional) The suggested solution of the exercise.
Folder(s): [Test group name]
Folder(s) containing the test data. For details on how to set up the actual test data, see test data.

To import an entire exercise group, simply place all the exercise folders in a zip file and upload it. Exercises and exercise groups can also be exported in the same format as described above.

Test Data

CodeJudge supports various forms of testing methods which suit different types of exercises. A test group consists of one or more tests. A test may consist of a number of parameters: standard input (in), command line arguments (args), expected output (out), a hint (hint), a score (score) and/or a test script (described in the following). To create a test group one must create the appropriate test files, e.g. files specifying the input and the expected output.

Suppose we want to create three tests for a program adding two numbers from standard input. The tests could be "5 7" with expected output "12", "1 2" with expected output "3", and "-9 9" with expected output "0". There are two ways to structure the test files: either in a single file or in many files (the first is most practical when creating tests manually, the second when generating them with scripts). With the first approach, the above test cases can be created in a single file containing:

/// Test
5 7
/// Out: 12
/// Test
1 2
/// Out: 3
/// Test
-9 9
/// Out: 0

The general pattern for this approach is a file named "Xxxx.[parameter]", where [parameter] can be any of the types described below. In this file, each test case should be started by a line with "/// Test". Tests may also be explicitly terminated by "/// EOT" (End Of Test). All lines not within a test are considered common to all tests. By default, tests created this way are named Xxxx01, Xxxx02, .... If you like, you can specify the name of a test with "/// Name: [name]" (the name may not contain whitespace). Finally, in any "Tests.[parameter1]" file one can write "/// [parameter2]: [value]" to set the contents of [parameter2] to [value] for a test (this is especially practical for hints and scores). (Note: specifying expected output is normally not necessary, see Expected Output for further details.)
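For instance, the addition exercise above could name its tests and attach a hint and scores directly in the combined file. The test values and hint text below are illustrative; the parameter names follow the in/out/hint/score list above:

/// Test
/// Name: Simple
5 7
/// Out: 12
/// Score: 1
/// Test
/// Name: Negative
-9 9
/// Out: 0
/// Hint: Did you consider negative numbers?
/// Score: 2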

Using the other approach, the test data could be saved in 6 different files, called, Test01.out,, Test02.out, and Test03.out respectively. The general pattern here is simple: each parameter must be specified in a single file per test case with the name "[test].[parameter]".

A number of examples on how tests can be set up, can be found in "Quick Start: How to create test data for my exercise?".


Configuration

Besides the parameters for each test, a test group can have some common configuration settings. A test group can be configured either directly on the CodeJudge site when uploading, or by uploading a file called "testgroup.json" together with the rest of the test files. The different parameters can be seen on CodeJudge when editing a test group under "Show advanced".

Expected Output (out)

When a program is tested, its output is compared against the expected output; the expected output is therefore a vital part of a test. The way in which the output is compared to the expected output is determined by the "judge" being used (see the section "Judges" for more information). Expected output can either be specified in files in the test data, or, more practically, it can be generated automatically by CodeJudge. If you want CodeJudge to generate it for you, simply upload a solution before uploading your test data.

Command Line Arguments (args)

Command Line Arguments, or just arguments, can be specified. See the documentation for your language to see how to get access to command line arguments.

Standard Input (in)

Standard input, also known as console input, is the most commonly used option besides test scripts.

Files in working directory (TestXX/wkdir/*)

All files and directories placed in the path /TestXX/wkdir/ will be copied to the working directory of the user's program when it is executed. This is useful if you want the users to learn about file access.

Test Scripts

A test script is a program written in the same language as the submission, which will be executed in combination with the submitted files. How this is done depends on the language. A test script is in many ways equivalent to a unit test. For instance, one could make an exercise where the users must implement a function average(a, b) that returns the average of a and b. In order to test it, you can upload a number of test scripts calling average(a, b) with different arguments. The easiest way to learn how to make test scripts is to look at our samples in "Quick Start: How to create test data for my exercise?".

Java (java)

In Java the test script must be a fully functional Java program, except that it may call methods the users are supposed to provide. That is, it must consist of a public class with a normal public static void main(String[] args) method. For instance:

public class Test01 {
    public static void main(String[] args) {
        System.out.println(Calculator.average(4, 9));
    }
}

If you have not uploaded a solution file before uploading your test data, you are also required to add Java files in the directory dummy/ that contain the classes and methods the users are supposed to implement. It is highly recommended not to use this approach, but simply to upload your solution instead. These classes/functions do not need to be functional, but they must be compileable. In other words, the test script file combined with these extra files must be able to compile. For the above example, we should add the file dummy/

public class Calculator {
    public static double average(double a, double b) {
        return 0; // Dummy, since it must be compileable
    }
}

Please do not use packages in test scripts or solutions. That way, we can support users putting the classes in their own packages.

Remaining Languages

See examples.
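As an illustration, a Python test script for the same hypothetical average exercise could look as follows. The function definition here only stands in for the user's submission; when run on CodeJudge, the test script is combined with the submitted files instead:

```python
# Stand-in for the user's submission. On CodeJudge the test script is
# executed together with the user's submitted file, so this definition
# would come from the user, not from the test script.
def average(a, b):
    return (a + b) / 2

# The test script simply calls the function and prints the result,
# which CodeJudge then compares to the expected output.
print(average(4, 9))  # prints 6.5
```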

Hints (hint)

You can add a hint to a test case, which will be shown to the user if the user fails the test (currently hints are shown no matter how a test fails). Hints are supposed to be short, for example "Did you consider negative numbers?" or similar.

Score (score)

A test can have an associated score. This is useful for competitions and grading purposes. The score must be a single number. Higher scores are considered better. The score of a submission is the sum of all the scores of the test cases it passes.

Size (size)

A test can be given a "size". This can be used for plotting running time vs. size, which is useful for analyzing the asymptotic running time of a solution.

Language Support

Below is a table of all languages currently supported on CodeJudge.

Language                    Arguments  Input  Files  Test Scripts
Java 8                      YES        YES    YES    YES
C (gcc 7.3.0)               YES        YES    YES    NO
C++ (C++14, g++ 7.3.0)      YES        YES    YES    YES
C++11 (g++ 7.3.0)           YES        YES    YES    YES
C++1z (g++ 7.3.0)           YES        YES    YES    YES
C# (Mono 5.14)              YES        YES    YES    NO
F# (4.0)                    YES        YES    YES    YES
Python3 (3.6)               YES        YES    YES    YES
Python2 (2.7)               YES        YES    YES    YES
Matlab (2018a)*             NO         YES    YES    YES
R (3.4.2)                   YES        YES    YES    YES
Bash (4.4 GNU)              YES        YES    YES    NO
Prolog (SWI-Prolog 7.4)     YES        YES    YES    YES
Rust (1.29)                 YES        YES    YES    YES
Pascal (fpc 3.0)            YES        YES    YES    NO
Coq (8.5pl3) [beta]         YES        YES    YES    NO
Elixir (1.5) [beta]         YES        YES    YES    NO
Haskell (8.0.2) [beta]      YES        YES    YES    NO
Go (1.9) [beta]             YES        YES    YES    NO

* Please note: Matlab support is only available if you have a special agreement with us.


Judges

A judge is the program on the grader which evaluates the output of the users' programs. For each test case the output of a user's program is (usually) compared to the expected output file, and if they match (the matching criteria depend on the judge), the test is passed.

CodeJudge currently supports the following three judges: Exact Judge, Token Based Judge and Custom Judge.

Exact Judge

The Exact judge checks if the output of the user's program exactly matches the expected output file, including all white spaces, new lines, etc. The only exceptions are that \r characters and line breaks at the very end of the file are ignored.

This judge is especially useful for exercises where strings including white spaces should be printed (so they must match 100%), and it can also be used with test scripts, since the user should not be printing the output in that case.
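As a sketch (not the actual grader code), the Exact judge's matching rule can be thought of as:

```python
def exact_match(user_output, expected_output):
    # Illustrative sketch of exact matching, not the actual CodeJudge
    # implementation: the outputs must be identical, except that
    # carriage-return characters and line breaks at the very end of
    # the file are ignored.
    def normalize(s):
        return s.replace("\r", "").rstrip("\n")
    return normalize(user_output) == normalize(expected_output)
```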

Token Based Judge

The Token Based judge also compares the output of the user's program with the expected output, but any sequence of white spaces, including new lines, is treated as a single white space, so how the user chooses to separate the output does not matter.

This judge is useful for input/output exercises where the users have to print the output themselves and the white spaces do not matter. This judge also has a number of configuration options (for decimal comparisons, case-sensitivity, etc.).
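Conceptually (again, a sketch rather than the actual grader code), token-based matching amounts to:

```python
def token_match(user_output, expected_output):
    # Illustrative sketch, not the actual CodeJudge implementation:
    # split both outputs on any run of white space (including new
    # lines) and compare the resulting token sequences. The real
    # judge additionally supports options such as decimal comparison
    # and case-sensitivity.
    return user_output.split() == expected_output.split()
```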

Custom Judge

If you have a more advanced exercise where one of the above judges cannot evaluate the programs properly you can write your own custom judge. The judge can be written in any of our supported languages.


To upload the custom judge, you must add a folder named judge/ when you upload the test files, and place the source code file of the judge in this folder.

Accessing data

If the user's program runs successfully, the custom judge will be run in the same working directory with two additional files: expected (the expected output) and output (the user's output), both without file extensions, which can be read by the judge. If the test has an input file, its contents can be read from standard input by the judge.

Evaluating the program

After evaluating the program, the judge can output the following lines to standard output to save the results (only the first is mandatory):

RESULT [result]
[result] must be CORRECT if the test case is accepted, otherwise WRONG. (mandatory)
TEXT [text]
An optional message to the users about this single test run. For instance, if the test was passed it could be "Correct", as for the other judges, or if the test failed it could be an error message like "Saw a but expected b".
SCORE [score]
A score can indicate how well the user passed the test case. This can be used for optimization problems.
FILE [file]
Specifies that [file] should be copied from the working directory, and will be shown together with the results of the test in CodeJudge.

If the judge program terminates with an error, the system will mark it as a system error.


A few templates and examples of custom judges will be available here later.

Execution Environment

All code submitted to our system is evaluated on dedicated graders. A grader evaluates at most one submission at a time, and does nothing else. The underlying operating system is a 64-bit Ubuntu Server. The submissions are graded in a secured environment; among other things, this prevents programs from accessing the internet during execution. Similarly, file access is restricted such that only a limited subset of files can be edited on the system.

Our current generation of graders run on Intel Core i3-8100 CPUs and 8GB of RAM. In very rare cases, we may utilize alternative graders with other specifications.


Assignments

An assignment consists of a set of exercises like an exercise group, but since the solutions to these exercises are to be handed in, the exercises have a due date. The last submission to each exercise before the due date counts as the solution handed in. (If a user mistakenly submits a wrong solution last, the user's other submissions can still be seen in the list of all submissions.)

The assignment pages are structured similarly to the exercise pages. Besides the tabs presented on the exercise pages, an assignment also has a "Hand-ins" tab (see Overview of Hand-ins), and an assignment exercise also has a "Plagiarism" tab (see Plagiarism).

Creating an Assignment

An assignment has the following properties:

Name
The name of the assignment.
Visible From
The date and time from which the assignment will be visible for the users.
Due Date
The due date for the assignment.
Time Limit
The assignment can have a time limit, which is the amount of time the users have to solve the exercises. The time starts when the user 'opens' the assignment, and only then do the exercises become available. After the time has passed, the user can no longer submit solutions.
Post Submissions
Selects whether or not the users can submit solutions after the due date (or after the time limit has passed). These submissions will not count as handed in, but can be used after feedback has been given to try to correct possible mistakes.
Group Hand-in
Selects whether or not group hand-in is allowed. Note: groups are not supported for time-limited assignments at the moment.
Allow Comments
Selects whether or not the users can write a comment on the assignment, which can be seen with their hand-in.
Visible to Authors
Selects whether or not the assignment is visible to course authors before the "Visible From" date.
Description
A description of the overall assignment. Each exercise still has its own description.

The exercises of an assignment are created exactly as training exercises, see exercise.

For assignments we recommend always having a test group called 'Sample Tests' (or similar) consisting of a few small test cases with full feedback, and then another test group with all the hard tests, which is either invisible or has no feedback, depending on what suits the course best. The sample tests help the users correct simple mistakes in their handed-in solution, while still not revealing whether the solution is completely correct.

Overview of Hand-ins

Under the 'Hand-ins' tab on the assignment page, a complete overview of all hand-ins to the assignment is shown. All users/groups are listed, and for each user/group it can be seen which exercises they solved or attempted. By clicking on a user/group, a hand-in page is shown where the handed-in submissions can be opened. On this page any comment from the users is also shown, and feedback can be written to the user/group.


Plagiarism

After the users have handed in, a plagiarism report can be generated per exercise. The plagiarism check compares the solutions two by two and finds identical sequences (all variable names are considered identical and white spaces are ignored); all solutions are then ranked by the amount of code that is identical to another solution. An overview of all solutions' plagiarism scores for an exercise can be seen in the 'Plagiarism' tab on the exercise. By clicking on a user/group here, it can be seen exactly what code was identical to the compared solution.


Settings

On the Settings page you can set some general settings for your course, add a FAQ and/or add custom pages to the course.


The following general settings can be configured for the course:

Course Name
The name of the course.
Short Name
The short name of the course.
Features to display
Here you can select if the pages 'Exercises' and/or 'Assignments' should be visible in the menu.
Default languages
The default programming languages of the course. These languages will be used for exercises where 'default languages' have been chosen.


FAQ

You can add a FAQ to your course, which will be shown on the 'Help' page. To add a FAQ entry you just need to write the following:

Title
The title of the FAQ entry (usually a question), which will be shown in the list of entries on the 'Help' page.
Description
The description of the entry (usually an answer to the question), which will be shown when a user clicks on the entry.


Custom Pages

You can add extra pages to the user menu. When adding a page you have the following properties to set:

Title
The title of the page, shown just below the menu when the page is opened.
Menu Name
The name of the page in the user menu.
Url
The name of the page in the URL. The URL to the page will be [COURSE URL NAME]/page/[Url]
Requires login
Select whether a user must be logged in to the course to view the page.
Requires admin
Select if the page should only be visible to course authors and admins.
Content
The content of the page, using markdown. Each header (#) and the following content until the next header will be placed in an area (a white box).


If you have any questions regarding CodeJudge, get unexpected errors, or have suggestions for improvements, please feel free to contact us. You can normally expect an answer within 24 hours.