This is a concept for building a Node.js API with TensorFlow to run your own Prisma-like backend server with Docker (credits to TensorFlow and neural-style).
- NVIDIA CUDA GPU
- Linux server with nvidia-docker installed
- AWS S3 or Minio server for storing the processed images
- Download the pre-trained VGG network and put it into the ./shared/model folder
- Create a folder named output under ./shared/ in the project
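The folder layout described above can be prepared from the project root. This is a minimal sketch; the exact VGG weights file to place in shared/model depends on which pre-trained network you download:

```shell
# Create the shared folders that the container mounts at runtime.
mkdir -p shared/model shared/output
# Put the downloaded pre-trained VGG weights into shared/model/
# (the download source and file name are not specified in this README).
```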
Navigate to the root folder of this project and run:

```
nvidia-docker build -t atom2ueki/prisma:edge .
nvidia-docker run -d --name prisma -p 8080:8080 -v {PATH_TO_THIS_PROJECT}/shared:/home/app/node-server/shared -e "MINIO_ENDPOINT=$MINIO_ENDPOINT" -e "MINIO_ACCESS_KEY=$MINIO_ACCESS_KEY" -e "MINIO_SECRET_KEY=$MINIO_SECRET_KEY" atom2ueki/prisma:edge
```
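The three MINIO_* variables referenced in the run command must be set in the shell beforehand. A sketch with placeholder values (substitute your own endpoint and credentials):

```shell
# Minio connection settings consumed by the container at startup.
# These values are placeholders, not real credentials.
export MINIO_ENDPOINT=minio.example.com:9000
export MINIO_ACCESS_KEY=YOUR_ACCESS_KEY
export MINIO_SECRET_KEY=YOUR_SECRET_KEY
```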
- POST URL: http://localhost:8080/prisma
- Parameters:
  - source_img: JPEG image file
  - style: String (pick any one from ['cosa', 'picasso', 'pop', 'prisma', 'scream', 'starry', 'wave'])
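A request can be sketched with curl, assuming the endpoint accepts a multipart/form-data upload (the field names come from the parameter list above; photo.jpg is a placeholder input file):

```shell
# Send a JPEG and a style name to the running container.
# The multipart encoding is an assumption about how the server
# parses the upload; adjust if it expects a different body format.
curl -X POST http://localhost:8080/prisma \
  -F "source_img=@photo.jpg" \
  -F "style=starry"
```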