This is an update to cuda-convnet.

This project has three major new features relative to cuda-convnet:

  1. Improved training times on Kepler-generation NVIDIA GPUs (GeForce Titan, K20, K40).
  2. Multi-GPU training support implementing data parallelism, model parallelism, and the hybrid approach described in the paper "One weird trick for parallelizing convolutional neural networks" (a toy sketch of that scheme follows this list).
  3. Less-polished code and incomplete (but improving) documentation.
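
The hybrid scheme in item 2 pairs data parallelism in the convolutional layers (most of the computation, few weights) with model parallelism in the fully-connected layers (most of the weights, little computation). The sketch below is a minimal, single-process numpy illustration of that data flow only; the worker count, layer sizes, and variable names are illustrative assumptions and are not part of the cuda-convnet2 API.

    # Toy illustration of hybrid parallelism (not cuda-convnet2 code).
    import numpy as np

    NUM_WORKERS = 2           # stand-ins for GPUs
    BATCH, IN_DIM = 8, 32     # toy batch and input sizes
    CONV_DIM, FC_DIM = 16, 4  # toy layer widths

    rng = np.random.default_rng(0)
    batch = rng.standard_normal((BATCH, IN_DIM))

    # "Convolutional" stage, data parallelism: every worker holds the same
    # weights but processes a different slice of the batch.
    conv_w = rng.standard_normal((IN_DIM, CONV_DIM))        # replicated weights
    shards = np.split(batch, NUM_WORKERS)                   # one shard per worker
    conv_out = [np.maximum(s @ conv_w, 0) for s in shards]  # per-worker forward pass

    # Workers exchange activations so each one sees the whole batch before
    # the fully-connected stage (the all-gather step described in the paper).
    full_acts = np.concatenate(conv_out)

    # Fully-connected stage, model parallelism: each worker owns a slice of
    # the weight matrix and computes its slice of the output for every image.
    fc_w = rng.standard_normal((CONV_DIM, FC_DIM))
    fc_w_slices = np.split(fc_w, NUM_WORKERS, axis=1)       # weights sharded by column
    fc_out = np.concatenate([full_acts @ w for w in fc_w_slices], axis=1)

    print(fc_out.shape)  # (8, 4): full batch, full fully-connected output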

Documentation

Usage

Reference

  • Arguments -- listing of command-line arguments
  • NeuronTypes -- listing of supported neuron activation functions
  • LearningRates -- listing of supported learning rate schedules

Contact
