# Latently, Bitfusion, and IBM Cloud enable democratized access to deep learning

## Earning the Latently Deep Learning Certificate

### Professional track

To implement a paper on our GPU cluster, send your resume to brian@latent.ly. We will reply with an invite to our Paperpile-based bibliography, which is updated daily with the latest advances. We will then set up a Google Hangout, work with you to pick a paper suited to your skill level, and give you an account on our GPU cluster, which runs Bitfusion Flex. Flex lets you spin up a Jupyter notebook in which even remote GPUs appear local, making large-scale data-parallel training relatively easy.

### Research track

More advanced candidates may use our hardware and mentorship resources to conduct original research and publish it in a lightweight fashion (e.g., on the arXiv, or here).

### Business

For those who are more business-oriented, we will work with you to design a novel deep learning architecture on our GPU cluster using state-of-the-art methods and frameworks such as TensorFlow.

### Academic

For those who are more academically oriented, you may use our resources on the Comet and Stampede supercomputers (Stampede is currently #20 in the world) to publish a model built using emergent, NEURON, GENESIS 3, MOOSE, NEST, PyNN, Brian, or FreeSurfer. At this time we can only provide mentorship for emergent. Note that research done on the Neuroscience Gateway must advance the state of the art in computational neuroscience, computational cognitive neuroscience, cognitive computational neuroscience, or a related field.

## Implementation guidelines

In general, it is preferable to replicate a result, i.e., to reproduce the plots, statistics, and results in the paper. Sometimes this isn't possible, in which case the closest approximate implementation is sufficient.

Code is subject to code review and should be factored into reusable modules and functions. Comments are appreciated.

All implementations should have a well-documented README, and when possible the code should run in a Jupyter notebook.
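As a rough illustration of the structure we look for — small, documented, reusable functions plus an explicit check of a reproduced statistic against the paper's reported value — here is a minimal sketch. All names and numbers below are made up for illustration; they do not come from any particular paper.

```python
"""Illustrative layout for a paper replication (all names/values hypothetical)."""


def accuracy(predictions, labels):
    """Return the fraction of predictions that match the labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have equal length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)


def matches_reported(result, reported, tolerance=0.01):
    """Check a reproduced statistic against the value reported in the paper."""
    return abs(result - reported) <= tolerance


# Example replication check (values are invented):
acc = accuracy([1, 0, 1, 1], [1, 0, 0, 1])      # 0.75
print(matches_reported(acc, reported=0.76, tolerance=0.02))  # True
```

Factoring the metric and the replication check into separate functions makes each piece easy to review, test, and reuse across notebooks.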

Don't look at other existing implementations of your paper; otherwise, the authors of those implementations may own your code.

## Frequently Asked Questions

### What languages / frameworks can I use?

In general we prefer that you use TensorFlow, but that is not a hard rule.

### How long does it take?

This really depends on the paper and your skill level; estimate between one week and three months.

### Who owns the code?

For now, all code is released into the public domain.

### Is there an example?

We will have a gold-standard example up soon. For now, check out this implementation: https://github.com/Latently/DLC/tree/master/MikolovJoulinChopraEtAl2015