🌲 TTool
is developed at the Laboratory for Timber Construction (director: Prof. Yves Weinand) with the support of the EPFL Center for Imaging and SCITAS, at EPFL, Lausanne, Switzerland. The project is part of the Augmented Carpentry research project.
🪚 TTool
is an open-source, AI-powered, supervised 6DoF pose detector for monocular cameras. It is developed in C++ for UNIX systems and enables accurate end-effector detection during woodworking operations such as cutting, drilling, sawing, and screwing with multiple tools. This is a fundamental component of any subtractive AR fabrication system: for instance, it lets you compute and give users feedback on the correct orientation and depth to start and finish a hole or a cut.
🖧 TTool
is an AI-assisted 6DoF pose detector that automatically recognizes tools and lets the user input an initial pose via an AR manipulator. The pose is then refined by a modified version of SLET (check out our changelog) and visualized as a projection onto the camera feed.
↳ TTool
can be imported as a C++ API into a third-party project or used as an executable. It is tailored to our specific use case in timber carpentry; see the Caveats section below to adapt it to your own.
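For the API route, consuming TTool from CMake might look like the fragment below. This is a sketch under assumptions: the `ttool` target name and the `external/ttool` checkout path are placeholders, not the project's documented exports; check the Wiki for the actual integration steps.

```cmake
# Hypothetical consumer project; "ttool" as a target name is an assumption.
cmake_minimum_required(VERSION 3.16)
project(my_ar_app CXX)

add_subdirectory(external/ttool)   # TTool cloned e.g. as a git submodule

add_executable(my_ar_app src/main.cpp)
target_link_libraries(my_ar_app PRIVATE ttool)
```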
🚀 For a quick hands-on start or more details, check out our Wiki.
TTool
is published in an MDPI journal paper in Applied Sciences, which you can find here.
@article{Settimi2024,
title = {TTool: A Supervised Artificial Intelligence-Assisted Visual Pose Detector for Tool Heads in Augmented Reality Woodworking},
volume = {14},
ISSN = {2076-3417},
url = {http://dx.doi.org/10.3390/app14073011},
DOI = {10.3390/app14073011},
number = {7},
journal = {Applied Sciences},
publisher = {MDPI AG},
author = {Settimi, Andrea and Chutisilp, Naravich and Aymanns, Florian and Gamerro, Julien and Weinand, Yves},
year = {2024},
month = apr,
pages = {3011}
}
a: the ML classifier detects the tool type from the camera feed and loads the corresponding 3D model.
b: the user inputs an initial pose of the tool via an AR manipulator.
c: the pose is refined with an edge-based algorithm.
d: the pose is projected onto the camera buffer and displayed to the user.
e: the user can now start the operation guided by computed feedback.
On the left, the user can select the tool type and input an initial pose. On the right, the pose is refined and projected onto the camera feed. The digital-twin alignment between the model and the chainsaw plate (or any other tool) is preserved even when the tool is occluded or inside the wood.
TTool was tailored to our specific use case. If you want to adapt it to your use case, you will need to change the following files:
CMakeLists.txt
: comment out the line include(cmake/dataset.cmake); TTool will then no longer fetch the models from Zenodo, so you have to provide them yourself (see the wiki on how to do this).
assets/config.yaml
: list the models you want to use and their paths by replacing these lines (lines 57 to 66 and lines 67 to 76 at commit b357383).
ML classifier
: to adapt the ML classifier to your use case, you will need to train your own model. We have a template in this repo.
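For the assets/config.yaml edit above, an entry might look like the fragment below. The key names and paths here are illustrative placeholders only; the real keys are the ones in assets/config.yaml at the referenced commit (lines 57 to 76 in b357383).

```yaml
# Hypothetical sketch: replace with the actual keys from assets/config.yaml.
models:
  - name: drill_bit_8mm
    path: assets/models/drill_bit_8mm.obj
  - name: chainsaw_blade
    path: assets/models/chainsaw_blade.obj
```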
This project was made possible thanks to the technical support and counseling of the EPFL Center for Imaging, in particular Florian Aymanns. We would also like to acknowledge the help of Nicolas Richart with the CMake project and CI development of TTool. Check out their GitHub organization to discover other nice projects they are helping build!