Ascend Computing Language (AscendCL) provides a collection of C language APIs for developing deep neural network applications such as object recognition and image classification, covering device management, context management, stream management, memory management, model loading and execution, operator loading and execution, and media data processing. You can call AscendCL APIs through a third-party framework to use the compute capability of the Ascend AI Processor, or encapsulate AscendCL into third-party libraries to provide the runtime and resource management capabilities of the Ascend AI Processor.
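As a rough illustration of how these API groups fit together, below is a minimal, hypothetical C++ sketch (not taken from this repository) of the typical AscendCL resource lifecycle: initialize AscendCL, open a device, create a context and a stream, and release everything in reverse order. The function names are standard AscendCL runtime APIs; device ID 0 and the simplified error handling are assumptions for illustration.

```cpp
// Minimal AscendCL lifecycle sketch (illustrative only; assumes CANN is
// installed and device 0 is available; error handling is simplified).
#include "acl/acl.h"
#include <cstdio>

int main() {
    // Initialize AscendCL; nullptr means no acl.json configuration file is used.
    if (aclInit(nullptr) != ACL_SUCCESS) {
        printf("aclInit failed\n");
        return 1;
    }

    int32_t deviceId = 0;
    aclrtContext context = nullptr;
    aclrtStream stream = nullptr;

    // Device, context, and stream management.
    aclrtSetDevice(deviceId);
    aclrtCreateContext(&context, deviceId);
    aclrtCreateStream(&stream);

    // ... memory management (aclrtMalloc/aclrtFree), model loading and
    // execution (aclmdl* APIs), and media data processing (DVPP) go here ...

    // Release resources in the reverse order of creation.
    aclrtDestroyStream(stream);
    aclrtDestroyContext(context);
    aclrtResetDevice(deviceId);
    aclFinalize();
    return 0;
}
```

The samples in this repository elaborate on this pattern with model loading, data pre-processing, and post-processing for their specific use cases.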
This repository provides a wide range of samples developed based on AscendCL APIs. When developing your own samples, feel free to refer to the existing samples in this repository.
On the Hardware Platform page, select the product you are using and the supported CANN version from the drop-down box to check the compatibility between them.
The samples on the current branch are adapted to CANN version >= 5.1.RC2.alpha006.
For historical versions, refer to the CANN Version Description.
Instructions for using historical versions:
- Tag: a snapshot of the code repository at a certain point in time. A tag (such as v0.1.0) is added when a software version is released and can be understood as a branch that no longer changes.
- Release: based on a tag, with richer information added, usually including compiled files.
- Users can select the corresponding tag in the repository's branch switch box to view the code and README of that version.
- Users can download the Source code archive provided by a release to use the code.
- To access the code of a tag from the command line, run the following commands.
```bash
# Download the master branch code
git clone https://github.com/Ascend/samples.git
cd samples
# Switch to a historical tag, taking v0.1.0 as an example
git checkout v0.1.0
```
Directory | Description |
---|---|
common | Public files shared by samples in the repository |
cplusplus | C++ samples in the repository |
python | Python samples in the repository |
st | Sample test cases in the repository |
Sample Name | Language | CANN Version | Description |
---|---|---|---|
DVPP interface sample | C++ | >=5.0.4 | Call DVPP interfaces to implement image processing, including crop, vdec, venc, jpegd, jpege, resize, batchcrop, cropandpaste, and other functions. |
Custom operator sample | C++ | >=5.0.4 | Verify the execution of custom operators, including the Add, batchnorm, conv2d, lstm, matmul, and reshape operators. |
200DK peripheral sample | C++ | >=5.0.4 | Cases for the 200DK peripheral interfaces, including configuring GPIO pins, reading and writing data over I2C, sending and receiving data over the UART1 serial port, and taking photos or videos with a camera. |
C++ classification sample | C++ | >=5.0.4 | Use the googlenet/ResNet-50 models to perform classification inference on the input data. Includes feature samples such as picture, video, dynamic batch, multi-batch, video stream, and universal camera. |
C++ detection sample | C++ | >=5.0.4 | Use the object detection/yolov3/yolov4/vgg_ssd/faster_rcnn models to perform detection on the input data. Includes feature samples such as universal picture, universal video, video stream, and universal camera. |
C++ natural language processing sample | C++ | >=5.0.4 | Use NLP models to perform inference on the input data. |
C++ other sample | C++ | >=5.0.4 | Other model inference samples, including black-and-white image colorization, super-resolution, image enhancement, etc. |
C++ multithreading sample | C++ | >=5.0.4 | Multi-threaded inference samples that use models such as yolov3 and object detection on the input data. |
C++ user contribution sample | C++ | >=5.0.4 | Inference samples contributed by users. |
Python classification sample | Python | >=5.0.4 | Use the googlenet/inceptionv3/vgg16 models to perform classification inference on the input data. |
Python detection sample | Python | >=5.0.4 | Use object detection models to perform detection on the input data. |
Python segmentation sample | Python | >=5.0.4 | Use segmentation models to segment the input image. |
Python natural language processing sample | Python | >=5.0.4 | Use NLP models to perform inference on the input data. |
Python other sample | Python | >=5.0.4 | Other model inference samples, including black-and-white image colorization, image restoration, etc. |
End-to-end sample from training to inference | Python | >=5.0.4 | End-to-end guidance from training to deployment, including mask recognition, garbage classification, cats-vs-dogs classification, etc. |
Python industry sample | Python | >=5.0.4 | More complex samples that combine hardware or use multiple models and multiple threads, such as removing a specified foreground object from an image and a robotic arm sample. |
Python user contribution sample | Python | >=5.0.4 | Inference samples contributed by users. |
Deploy and run a sample by referring to the corresponding README file.
Obtain related documentation at Ascend Documentation.
Get support from our developer community. Find answers and talk to other developers in our forum.
Ascend website: https://www.huaweicloud.com/intl/en-us/ascend/home.html
Ascend forum: https://forum.huawei.com/enterprise/en/forum-100504.html
Welcome to contribute to Ascend Samples. For more details, please refer to our Contribution Wiki.