Read more or try it out now below

See the preprint paper for more information on the method.
Example outputs are shown below.

Get the source code now on GitHub.

Want to see how it works?
Give VSR-SIM a try with your own SIM stacks.

You can also use the VSR-SIM Test Images, which consist of simulated images and data from an SLM-based SIM microscope.

Perform SIM reconstruction with VSR-SIM

🤗 Hugging Face App for VSR-SIM


Note that the currently served model is trained on 9-frame SIM stacks at 512x512 resolution. Batch reconstruction of video-sequence SIM data is not supported in the app due to bandwidth and computational limitations; see the GitHub repository to set this up locally. However, single SIM stacks with motion between frames should still benefit from reconstruction with VSR-SIM above.
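For local use via the GitHub repository, reconstructing a single stack might look roughly like the sketch below. This is a hypothetical outline only: StandInModel, the file names, and the min-max normalisation are placeholders standing in for the repository's actual model, weights, and preprocessing; the only assumption carried over from above is the input format of a 9-frame stack at 512x512 resolution.

```python
# Hypothetical sketch of single-stack reconstruction; the real model, weights
# and preprocessing live in the GitHub repository and may differ from this.
import numpy as np
import tifffile
import torch
import torch.nn.functional as F


class StandInModel(torch.nn.Module):
    """Trivial stand-in for the trained VSR-SIM network (placeholder only)."""

    def __init__(self, upscale=2):
        super().__init__()
        self.upscale = upscale
        self.conv = torch.nn.Conv2d(9, 1, kernel_size=3, padding=1)

    def forward(self, x):
        # Upsample spatially, then collapse the 9 input frames to one output image.
        x = F.interpolate(x, scale_factor=self.upscale, mode="bilinear", align_corners=False)
        return self.conv(x)


# Load a raw acquisition: the served model expects 9 frames at 512x512 pixels.
stack = tifffile.imread("sim_stack.tif").astype(np.float32)
assert stack.shape == (9, 512, 512), "expected a 9-frame 512x512 SIM stack"

# Simple per-stack min-max normalisation (the repository may use a different scheme).
stack = (stack - stack.min()) / (stack.max() - stack.min() + 1e-8)

model = StandInModel().eval()  # replace with the repository's trained VSR-SIM model

with torch.no_grad():
    inp = torch.from_numpy(stack).unsqueeze(0)  # add batch dimension: (1, 9, 512, 512)
    out = model(inp)                            # reconstructed image, here (1, 1, 1024, 1024)

tifffile.imwrite("reconstruction.tif", out.squeeze().cpu().numpy())
```

For batches or video-sequence data, use the repository's own scripts, as noted above.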

Examples

See the difference between reconstructions from VSR-SIM and:

  • (a) the wide-field projection of the SIM stack (see the sketch after this list), and
  • (b) our previous method, ML-SIM (source code also on GitHub: charlesnchr/ML-SIM).
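For reference, the wide-field projection in (a) is simply the raw stack collapsed along the frame axis: averaging the frames cancels the illumination patterns, which is why sample drift shows up as motion blur in that image. A minimal sketch, assuming the stack is stored as a multi-frame TIFF:

```python
# Wide-field projection of a raw SIM stack: averaging the frames cancels the
# structured illumination patterns and leaves a conventional wide-field image.
import numpy as np
import tifffile

stack = tifffile.imread("sim_stack.tif").astype(np.float32)  # e.g. shape (9, 512, 512)
widefield = stack.mean(axis=0)                               # per-pixel average over the frames
tifffile.imwrite("widefield_projection.tif", widefield)
```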

Microtubules imaged during sample drift. Motion blur manifests in the wide-field projection.


Simulated SIM image sequence from a nature documentary dataset. Motion blur is reconstructed into motion artefacts.