Unmix presentation and content for docpages #3786

Merged · 6 commits · Jan 15, 2025
12 changes: 12 additions & 0 deletions app/grandchallenge/core/static/css/base.scss
@@ -54,3 +54,15 @@ blockquote {
div.codehilite {
margin-bottom: $paragraph-margin-bottom;
}

.docpage h3:first-of-type {
text-align: center;
}

.docpage h3:has(+ h4) {
margin-bottom: 2 * $paragraph-margin-bottom;
}

.docpage :is(h1, h2, h3, h4, h5, h6) {
font-weight: bold;
}
@@ -7,5 +7,7 @@

<h2>Update Page</h2>

-{% crispy form %}
+<div class="docpage">
+    {% crispy form %}
+</div>
{% endblock %}
@@ -151,7 +151,7 @@
{% endif %}
</div>
{% else %}
<div class="mt-4" id=pageContainer>{{ currentdocpage.content|md2html }}</div>
<div class="mt-4 docpage" id=pageContainer>{{ currentdocpage.content|md2html }}</div>
<div class="row">
<div class="d-inline-block col-6 text-left">
{% if currentdocpage.previous %}
@@ -35,7 +35,9 @@

{% block content %}
<h2>{% if object %}Update{% else %}Create{% endif %} Page</h2>
-{% crispy form %}
+<div class="docpage">
+    {% crispy form %}
+</div>
{% endblock %}

{% block script %}
345 changes: 345 additions & 0 deletions scripts/create_docpages.py
@@ -0,0 +1,345 @@
from grandchallenge.documentation.models import DocPage


def run():

    DocPage.objects.create(
        title="Getting Started",
        content="""### How to use Grand Challenge



#### <i class="fas fa-users mr-2"></i> Participate in Grand Challenge in just a few steps

[Register](https://grand-challenge.org/documentation/registration/) your account, request [verification](https://grand-challenge.org/documentation/verification/) and you're all set to participate.

![](/static/images/challenge.png)



#### <i class="fas fa-graduation-cap mr-2"></i> Learn how to use and create your own algorithms, challenges or reader studies

Check out our hands-on tutorials on [algorithms](https://grand-challenge.org/documentation/algorithms/), [challenges](https://grand-challenge.org/documentation/challenges/) and [reader studies](https://grand-challenge.org/documentation/reader-studies/).



#### <i class="fas fa-laptop-code mr-2"></i> Make use of our public API

Uploading your data or running your algorithm on Grand Challenge can be done through the website, but you can also interact with Grand Challenge programmatically through the API. Learn more about how to use the GC API [here](https://grand-challenge.org/documentation/grand-challenge-api/).
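
For example, a minimal sketch of listing algorithms through the API with Python might look like this (it assumes you have created a personal API token and that the endpoint returns standard paginated JSON; see the API documentation for the exact authentication scheme):

```python
# Minimal sketch: list algorithms via the Grand Challenge REST API.
# The token placeholder and the exact header scheme are assumptions;
# consult the API documentation linked above for the details.
import requests

API_ROOT = "https://grand-challenge.org/api/v1/"
TOKEN = "your-personal-api-token"  # hypothetical placeholder

response = requests.get(
    API_ROOT + "algorithms/",
    headers={"Authorization": f"BEARER {TOKEN}"},
    timeout=30,
)
response.raise_for_status()

for algorithm in response.json()["results"]:
    print(algorithm["title"])
```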



#### <i class="fas fa-question-circle mr-2"></i> Still got questions?

If you cannot find the answer to your question here or in the [forum](https://grand-challenge.org/forums/forum/general-548/), feel free to contact us at [support@grand-challenge.org](mailto:support@grand-challenge.org).



#### <i class="fas fa-code-branch mr-2"></i> Contributing to Grand Challenge

Would you like to contribute new features to Grand Challenge or spin up your own instance? Have a look at our [developer documentation](https://comic.github.io/grand-challenge.org/).""",
    )

    DocPage.objects.create(
        title="Developing an Algorithm from a Template",
        content="""### Developing an Algorithm from a Template

If you do not want to worry about how data is loaded and written, and you just want to get your Algorithm onto the platform with as few lines of code as possible, then finalizing an (almost) working custom-made template is probably the easiest way.

We'll be using a [demo algorithm template](https://github.com/DIAGNijmegen/demo-algorithm-template) for this instruction. In practice, however, you should prefer downloading the up-to-date template tailored to your algorithm!

Algorithm editors can find the templates via **<i class="fas fa-hard-hat fa-fw"></i> Templates -> <i class="fa fa-download pl-1"></i> Download Algorithm Image Template**:


<img class="border shadow" src="/static/images/archive.png" style="width: 40em;"/>



#### Reference Algorithm

In this tutorial, we will build an Algorithm image for a U-Net that segments retinal blood vessels from the [DRIVE Challenge](https://drive.grand-challenge.org/).

The image below shows the output of a very simple U-Net that segments vessels.

<img src="/static/images/annotation.png" style="width: 40em;"/>

To start the process, let's clone the repository that contains the weights from a pre-trained model and the Python scripts to run inference on a new fundus image.

```bash
$ git clone https://github.com/DIAGNijmegen/drive-vessels-unet.git
```




#### Create a base repository using the algorithm template

The templates provide the scaffolding to wrap your algorithm in a Docker container. Just execute the following command in a terminal of your choice:

```bash
$ git clone https://github.com/DIAGNijmegen/demo-algorithm-template
```

This will create a templated repository with a [Dockerfile](https://docs.docker.com/engine/reference/builder/) and other files.

The container files in this repository were automatically generated by the platform. They include bash scripts for building, testing, and saving the algorithm image:

```bash
├── Dockerfile
├── README.md
├── do_build.sh
├── do_save.sh
├── do_test_run.sh
├── inference.py
├── requirements.txt
├── resources
│   └── some_resource.txt
└── test
    └── input
        ├── age-in-months.json
        └── images
            └── color-fundus
                └── 998dca01-2b74-4db5-802f-76ace545ec4b.mha
```



#### Running the test

It is informative to build and run the algorithm image as a container on your local system. This allows for quick debugging without the (somewhat slow) saving and uploading of the image.

There is a helper script for this that contains the correct docker calls:

```bash
$ ./do_test_run.sh
```

This should output some basic docker build logs and all the stdout and stderr output the template currently produces. Note that on the first run, the build process might take a while, since it needs to download some large image layers.




#### Inserting the Algorithm

The next step is to edit _inference.py_. This is the file where you will insert the implementation of the reference algorithm.

In _inference.py_, a function `run()` has been created for you; it is called via:

```python
if __name__ == "__main__":
    raise SystemExit(run())
```

The default `run()` function generated by the platform does simple reading of the input and saving of the output. In between reading and writing, there is a clear point where we can insert the reference algorithm:

```python
def run():
    # Read the input
    input_color_fundus_image = load_image_file_as_array(
        location=INPUT_PATH / "images/color-fundus",
    )
    input_age_in_months = load_json_file(
        location=INPUT_PATH / "age-in-months.json",
    )  # Note: we'll be ignoring this input completely

    # Process the inputs: any way you'd like
    _show_torch_cuda_info()

    with open(RESOURCE_PATH / "some_resource.txt", "r") as f:
        print(f.read())

    # TODO: add your custom inference here

    # For now, let us make bogus predictions
    output_binary_vessel_segmentation = numpy.eye(4, 2)

    # Save your output
    write_array_as_image_file(
        location=OUTPUT_PATH / "images/binary-vessel-segmentation",
        array=output_binary_vessel_segmentation,
    )

    return 0
```

The reference algorithm can be found in a similarly structured file: _[reference-algorithm/inference.py](https://github.com/DIAGNijmegen/drive-vessels-unet/blob/master/inference.py)_.

We'll copy over the relevant parts, adding `import` statements at the top of our Python script as needed, and we'll include some pre- and post-processing of the images.
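
The import block we end up needing for the snippets below looks something like this (a sketch; the helper functions such as `load_image_file_as_array` and the `INPUT_PATH`/`OUTPUT_PATH`/`RESOURCE_PATH` constants are already defined by the template):

```python
import monai
import numpy as np
import torch
import torch.nn.functional as F
from scipy.special import expit
from skimage import transform
```

First, we'll start with the torch device settings and initializing the model: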

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Initialize MONAI UNet with updated arguments
model = monai.networks.nets.UNet(
    spatial_dims=2,
    in_channels=3,
    out_channels=1,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
).to(device)
```

Next, we'll load in the weights.



##### 🔩 Copying your model weights into the image

Ensure that you copy all the files needed to run your scripts, including the model weights, into `/opt/app/`. This can be configured in the Dockerfile using the `COPY` command. If your model weights are stored in a `resources/` folder, they are already copied into the image. This is done via this line of the Dockerfile:

```Dockerfile
COPY --chown=user:user resources /opt/app/resources
```

For now, we'll be copying the `best_metric_model_segmentation2d_dict.pth` from our reference Algorithm into the `resources/` directory.

Of course, we'll still need to load the weights into our initialized model by adding the following line to _inference.py_:

```python
model.load_state_dict(
    torch.load(
        RESOURCE_PATH / "best_metric_model_segmentation2d_dict.pth",
        map_location=device,  # map GPU-saved weights onto the CPU if no GPU is available
    )
)
```
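
Since we only do inference, we also switch the model to evaluation mode right after loading the weights, so that layers such as dropout and batch normalization behave deterministically:

```python
# Ensure model is in evaluation mode
model.eval()
```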




##### 🛠️ Processing Input and Output

The input is already read, but generally we need to convert it a bit before it works with our algorithm. We're going to hide that in a `pre_process` function. The same holds for the output: we are already writing a numpy array to an image, but we might need to perform some thresholding after our forward pass. We'll do that in a `post_process` function of our own design. In _inference.py_ we'll combine the processing with the forward pass:

```python
input_tensor = pre_process(image=input_color_fundus_image, device=device)

# Do the forward pass
with torch.no_grad():
    out = model(input_tensor).squeeze().detach().cpu().numpy()

output_binary_vessel_segmentation = post_process(
    image=out, shape=input_color_fundus_image.shape
)
```



##### 🏗️ Combining everything

Finally, we should end up with an updated _inference.py_ that will look something like this:

```python
def run():
    # Read the input
    input_color_fundus_image = load_image_file_as_array(
        location=INPUT_PATH / "images/color-fundus",
    )
    input_age_in_months = load_json_file(
        location=INPUT_PATH / "age-in-months.json",
    )

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Initialize MONAI UNet with updated arguments
    model = monai.networks.nets.UNet(
        spatial_dims=2,
        in_channels=3,
        out_channels=1,
        channels=(16, 32, 64, 128, 256),
        strides=(2, 2, 2, 2),
        num_res_units=2,
    ).to(device)

    model.load_state_dict(
        torch.load(
            RESOURCE_PATH / "best_metric_model_segmentation2d_dict.pth",
            map_location=device,
        )
    )

    # Ensure model is in evaluation mode
    model.eval()

    input_tensor = pre_process(image=input_color_fundus_image, device=device)

    # Do the forward pass
    with torch.no_grad():
        out = model(input_tensor).squeeze().detach().cpu().numpy()

    output_binary_vessel_segmentation = post_process(
        image=out, shape=input_color_fundus_image.shape
    )

    # Save your output
    write_array_as_image_file(
        location=OUTPUT_PATH / "images/binary-vessel-segmentation",
        array=output_binary_vessel_segmentation,
    )

    return 0


def pre_process(image, device):
    # Step 1: Convert the input numpy array to a PyTorch tensor with float data type
    input_tensor = torch.from_numpy(image).float()

    # Step 2: Rearrange dimensions from [height, width, channels] to [channels, height, width]
    input_tensor = input_tensor.permute(2, 0, 1)

    # Step 3: Add a batch dimension to make it [1, channels, height, width]
    input_tensor = input_tensor.unsqueeze(0)

    # Step 4: Move the tensor to the device (CPU or GPU)
    input_tensor = input_tensor.to(device)

    # Calculate padding so that height and width are multiples of 16,
    # since the U-Net halves the resolution four times
    height, width = image.shape[:2]
    pad_height = (16 - (height % 16)) % 16
    pad_width = (16 - (width % 16)) % 16

    # Apply padding equally on all sides
    padding = (
        pad_width // 2,
        pad_width - pad_width // 2,
        pad_height // 2,
        pad_height - pad_height // 2,
    )

    return F.pad(input_tensor, padding)


def post_process(image, shape):
    # Resize the prediction back to the original image size
    image = transform.resize(image, shape[:-1], order=3)
    # Apply a sigmoid and threshold to obtain a binary mask
    image = expit(image) > 0.80
    return (image * 255).astype(np.uint8)
```

There are a few things we still need to do before we can run the algorithm image.



#### Updating the Dockerfile

Ensure that you use the right base image in your Dockerfile. For our U-Net, we will build our Docker image with the [official PyTorch Docker image](https://hub.docker.com/r/pytorch/pytorch) as the base. This takes care of installing PyTorch with the necessary CUDA environment inside your Docker image. If you're using TensorFlow, build your Docker image with the official TensorFlow base image. You can browse [Docker Hub](https://hub.docker.com/) to find your preferred base image. The base image is specified in the first line of your Dockerfile:

```Dockerfile
FROM pytorch/pytorch
```

Here are some [best practices](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) for configuring your Dockerfile.
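
Putting the pieces together, a minimal Dockerfile for this template might look like the sketch below. This is an illustration rather than the exact generated file: the unprivileged `user` account and the `/opt/app/` layout follow the template conventions shown above, but your generated Dockerfile may differ in the details:

```Dockerfile
FROM pytorch/pytorch

# Run as an unprivileged user, following the template conventions
RUN groupadd -r user && useradd -m --no-log-init -r -g user user
USER user
WORKDIR /opt/app

# Install the Python dependencies first to make good use of layer caching
COPY --chown=user:user requirements.txt /opt/app/
RUN python -m pip install --user --no-cache-dir -r /opt/app/requirements.txt

# Copy the model weights and other resources, plus the inference script
COPY --chown=user:user resources /opt/app/resources
COPY --chown=user:user inference.py /opt/app/

ENTRYPOINT ["python", "inference.py"]
```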



##### 📝 Configuring requirements.txt

Ensure that all of the dependencies with their versions are specified in **requirements.txt** as shown in the example below:

```
SimpleITK
numpy
monai==1.4.0
scikit-learn
scipy
scikit-image
```

Note that we haven't included `torch`, as it already comes with the PyTorch base image we specified in our Dockerfile in the previous step.



##### 🦾 Do a test run locally

Finally, we are near the end! Add a good example input image in `test/input/images/color-fundus` and run the local test:

```bash
$ ./do_test_run.sh
```

This should create a local Docker image, spawn a container, and do a forward pass on the input image. If all goes well, it should output a binary segmentation to `test/output/images/binary-vessel-segmentation`.
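
Once the local test run succeeds, the `do_save.sh` script from the repository tree above can be used to save the algorithm image to a file that you can then upload to Grand Challenge:

```bash
$ ./do_save.sh
```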
""",
)

print("Example docpages created.")