
Installation/Training/Inference Instructions #9

Open
atlury opened this issue Jan 20, 2017 · 1 comment
atlury commented Jan 20, 2017

@naibaf7 Could you please provide some instructions for installation, training, and inference?

  1. I already have Caffe compiled and installed (OpenCL with LibDNN, on Ubuntu).
    Do I now set the option compile_caffe = false in config.py? How will the PyGreentea setup find the installed Caffe library? Or do I just run setup.py?

  2. Will the net.prototxt still need INTEL_SPATIAL to get OpenCL acceleration? I don't see it in this example:
    https://github.com/naibaf7/PyGreentea/blob/master/examples/2D_usk_malis_softmax/net.prototxt

  3. Roughly how many images did you use to get good accuracy? I read that medical image datasets are limited and that you used other means to increase the quantity of images?

  4. For inference, is test.py the right place?

I'll have more questions once I start running things. Thanks for the patience. :-)

naibaf7 (Owner) commented Jan 20, 2017

  1. PyGreentea expects to be in the same folder as the caffe git repository, so you should put both repositories at the same level. Alternatively, you can change the import path in PyGreentea (see the first sketch after this list).
  2. OpenCL acceleration is enabled when you compile Caffe with it; there is no need to change the prototxt (see the second sketch after this list). Note that SK convolutions are NOT supported by Intel spatial; only LibDNN and standard Caffe can run them.
  3. For the thesis I used up to 20 images of size 1024x1024. This will be different if you want to do object detection or scene segmentation, though; you will need more. Maybe you can give some more information about what you are trying to do. If you have an AlexNet pre-trained for recognition and want to segment a whole scene, you might want to try converting AlexNet to an SK-AlexNet and using that for inference (see the net-surgery sketch after this list). This is tricky, but possible. On a recent AMD GPU with LibDNN it can segment up to a megapixel per second nowadays. Not sure what you'll get with Intel. If you can downscale the resolution of the images it might be a bit faster, although each SK network has an efficiency sweet spot.
  4. test.py is inference, correct.
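For point 1, here is a minimal sketch of the import-path alternative, assuming your pre-built Caffe lives somewhere other than a sibling checkout. The path below is a placeholder, and the exact spot where PyGreentea does its import may differ:

```python
# Sketch: point Python at an already-compiled Caffe (OpenCL + LibDNN) build
# instead of relying on a sibling "caffe" checkout. Run/adapt this before
# anything imports caffe.
import sys

CAFFE_PYTHON = '/path/to/your/caffe/python'  # placeholder; adjust to your build
sys.path.insert(0, CAFFE_PYTHON)             # takes precedence over defaults

import caffe  # now resolves against the build above
```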
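To illustrate point 2, a hedged pycaffe NetSpec sketch: the convolution layer carries no engine-specific fields, because the OpenCL/LibDNN backend is chosen when Caffe is compiled, not in the network definition. Layer names and shapes here are made up:

```python
# Sketch: a convolution defined with pycaffe's NetSpec. Note the absence of
# any engine / INTEL_SPATIAL setting -- OpenCL acceleration comes from the
# build configuration, not from the prototxt.
import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 1, 128, 128]))  # dummy single-channel input
n.conv1 = L.Convolution(n.data, num_output=24, kernel_size=3,
                        weight_filler=dict(type='msra'))
print(n.to_proto())  # emits plain prototxt with no engine-specific fields
```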
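And for the AlexNet-to-SK-AlexNet idea in point 3, a rough net-surgery sketch using only standard pycaffe calls. 'alexnet_sk.prototxt' is a hypothetical hand-converted definition with matching layer names; the actual SK conversion (strides into kernel strides, fully-connected into convolutional layers) still has to be done in the prototxt itself:

```python
# Sketch: transplant trained AlexNet weights into an SK variant of the same
# architecture, then save the result for dense inference. Layers whose names
# match are copied blob-by-blob; renamed/reshaped layers need manual handling.
import caffe

src = caffe.Net('alexnet.prototxt', 'alexnet.caffemodel', caffe.TEST)
dst = caffe.Net('alexnet_sk.prototxt', caffe.TEST)  # hypothetical SK version

for name in src.params:
    if name not in dst.params:
        continue  # e.g. fc layers converted to conv need a manual reshape
    for i, blob in enumerate(src.params[name]):
        dst.params[name][i].data[...] = blob.data  # copy weights/biases

dst.save('alexnet_sk.caffemodel')
```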
