diff --git a/docs/components/.pages b/docs/components/.pages
index 766bda9..8981c8b 100644
--- a/docs/components/.pages
+++ b/docs/components/.pages
@@ -5,6 +5,6 @@ nav:
- Kobo: kobo
- Payment Gateway: pg.md
- Country Report: reporting
- - Deduplication: hde
+ - Deduplication: hde.md
- RapidPro: rapidpro
- Workspace: workspace.md
diff --git a/docs/components/hde/deduplication_description.md b/docs/components/hde.md
similarity index 64%
rename from docs/components/hde/deduplication_description.md
rename to docs/components/hde.md
index 3753902..4cb4d96 100644
--- a/docs/components/hde/deduplication_description.md
+++ b/docs/components/hde.md
@@ -1 +1,5 @@
+# Deduplication Engine
+
It provides users with powerful capabilities to identify and remove duplicate records within the system, ensuring that data remains clean, consistent, and reliable.
+
+{:target="_blank"}
diff --git a/docs/components/hde/.pages b/docs/components/hde/.pages
deleted file mode 100644
index 8617882..0000000
--- a/docs/components/hde/.pages
+++ /dev/null
@@ -1,8 +0,0 @@
-nav:
- - index.md
- - Setup: setup.md
- - REST API: API.md
- - Demo Application: demo.md
- - Duplicated Image Detection: did
- - Troubleshooting: troubleshooting.md
- - Development: development.md
diff --git a/docs/components/hde/API.md b/docs/components/hde/API.md
deleted file mode 100644
index d6172d1..0000000
--- a/docs/components/hde/API.md
+++ /dev/null
@@ -1,17 +0,0 @@
-The application provides comprehensive API documentation to facilitate ease of use and integration. API documentation is available via two main interfaces:
-
-#### Swagger UI
-An interactive interface that allows users to explore and test the API endpoints. It provides detailed information about the available endpoints, their parameters, and response formats. Users can input data and execute requests directly from the interface.
-
-URL: `http://localhost:8000/api/rest/swagger/`
-
-#### Redoc
-A static, beautifully rendered documentation interface that offers a more structured and user-friendly presentation of the API. It includes comprehensive details about each endpoint, including descriptions, parameters, and example requests and responses.
-
-URL: `http://localhost:8000/api/rest/redoc/`
-
-
-These interfaces ensure that developers have all the necessary information to effectively utilize the API, enabling seamless integration and interaction with the application’s features.
-
-!!! warning "Environment-Specific URLs"
-    The URLs will vary depending on the server where the application is hosted. If the server is hosted anywhere other than the local machine, replace **http://localhost:8000** with the server's domain URL.
diff --git a/docs/components/hde/demo.md b/docs/components/hde/demo.md
deleted file mode 100644
index 2443b9f..0000000
--- a/docs/components/hde/demo.md
+++ /dev/null
@@ -1,76 +0,0 @@
-To help you explore the functionality of this project, a demo server can be run locally using the provided sample data. This demo server includes pre-configured settings and sample records to allow for a comprehensive overview of the application's features without needing to configure everything from scratch.
-
-
-## Running the Demo Server Locally
-
-To set up and start the demo server locally, use the following command:
-
- docker compose -f tests/extras/demoapp/compose.yml up --build
-
-This command will build and launch all necessary containers for the demo environment, allowing you to see how different components of the system interact. Once everything is running, you can access the demo server's admin panel to manage and configure various settings within the application.
-
-## Accessing the Admin Panel
-
-The admin panel is accessible via the following URL in your browser, using the credentials below:
-
-- URL: **http://localhost:8000/admin**
-- Username: **adm@hde.org**
-- Password: **123**
-
-
-## API Interaction
-
-To further understand how the API works and how different endpoints can be used, there are scripts available for API interaction. These scripts are located in the `tests/extras/demoapp/scripts` directory.
-
-### Prerequisites
-
-To use these scripts, ensure that the following tools are installed:
-
-- [httpie](https://httpie.io/): A command-line HTTP client, used for making API requests in a more readable format compared to traditional curl.
-- [jq](https://jqlang.github.io/jq/): A lightweight and flexible command-line JSON processor that allows you to parse and manipulate JSON responses from API endpoints.
-
-### Scripts Overview
-
-#### Configuration Scripts
-
-Configuration scripts are used to set up the environment for the API interactions. These scripts hold internal settings and functions that are shared across multiple API interaction scripts, making it easier to reuse common functionality and standardize configuration.
-
-| Name | Arguments | Description |
-|-----------------------|-----------|-------------------------------------------------|
-| .vars | - | Contains configuration variables |
-| .common | - | Contains common functions used by other scripts |
-
-
-#### Public Scripts
-
-These scripts help manage specific parameters for API interactions, allowing for easy setup and modification of variables that will be used in other commands.
-
-| Name | Arguments | Description |
-|-----------------------|----------------------|---------------------------|
-| use_base_url | base url | Sets base url |
-| use_auth_token | auth token | Sets authentication token |
-| use_deduplication_set | deduplication set id | Sets deduplication set id |
-
-
-#### API Interaction Scripts
-
-These scripts are used to interact directly with the API endpoints, performing various operations like creating deduplication sets, uploading images, starting the deduplication process, and retrieving results.
-
-| Name | Arguments | Description |
-|---------------------------|-----------------------------------------|---------------------------------------------|
-| create_deduplication_set | reference_pk | Creates new deduplication set |
-| create_image | filename | Creates image in deduplication set |
-| ignore | first reference pk, second reference pk | Makes API ignore specific reference pk pair |
-| process_deduplication_set | - | Starts deduplication process |
-| show_deduplication_set | - | Shows deduplication set data |
-| show_duplicates | - | Shows duplicates found in deduplication set |
-
-
-#### Test Case Scripts
-
-Test case scripts are designed to automate end-to-end testing scenarios, making it easy to validate the deduplication functionality.
-
-| Name | Arguments | Description |
-|------------------|--------------|--------------------------------------------------------------------------------------------------------------------------------|
-| base_case | reference pk | Creates deduplication set, adds images to it and runs deduplication process |
-| all_ignored_case | reference pk | Creates deduplication set, adds images to it, adds all possible reference pk pairs to ignored pairs and shows duplicates found |
\ No newline at end of file
diff --git a/docs/components/hde/development.md b/docs/components/hde/development.md
deleted file mode 100644
index 56e1b35..0000000
--- a/docs/components/hde/development.md
+++ /dev/null
@@ -1,18 +0,0 @@
-## Local Development
-
-To develop the service locally, you can utilize the provided `compose.yml` file. This configuration file defines all the necessary services, including the primary application and its dependencies, to create a consistent development environment. By using **Docker Compose**, you can effortlessly spin up the entire application stack, ensuring that all components work seamlessly together.
-
-To build and start the service, along with its dependencies, run the following command:
-
- docker compose up --build
-
-
-## Running Tests
-To ensure that the service is working correctly, a comprehensive suite of tests is available. To run these tests, execute the following command:
-
- docker compose run --rm backend pytest tests -v --create-db
-
-
-## Viewing Coverage Report
-
-After running the tests, a coverage report will be generated. This report helps in assessing how much of the code is covered by the tests, highlighting any areas that may need additional testing. You can find the coverage report in the `~build/coverage` directory.
diff --git a/docs/components/hde/did/.pages b/docs/components/hde/did/.pages
deleted file mode 100644
index 3287baa..0000000
--- a/docs/components/hde/did/.pages
+++ /dev/null
@@ -1,5 +0,0 @@
-nav:
- - Image Processing and Duplicate Detection: index.md
- - Configuration: config.md
- - workflow.md
-
\ No newline at end of file
diff --git a/docs/components/hde/did/config.md b/docs/components/hde/did/config.md
deleted file mode 100644
index 9a721dc..0000000
--- a/docs/components/hde/did/config.md
+++ /dev/null
@@ -1,68 +0,0 @@
-The configuration can be managed directly through the **admin panel**, which provides a simple way to modify settings without changing the codebase. Navigate to:
-
- Home › Constance › Config
-
-Here, you will find all the configurable settings that affect the behavior of the system, allowing for quick adjustments and better control over application behavior.
-
-## Deep neural networks (DNN)
-
-The deep learning component of the system is crucial for performing advanced inference tasks, including **face detection**, **face recognition**, and **finding duplicate images** using a pre-trained model. These tasks are fundamental to ensuring the accuracy and efficiency of the system in identifying and managing images.
-
-This component relies on **Convolutional Neural Networks (CNNs)**, a type of deep learning model particularly well-suited for processing visual data. CNNs are used to automatically extract relevant features from images, such as facial landmarks and distinctive patterns, without the need for manual feature engineering.
-
-### DNN_BACKEND
-
-Specifies the computation backend to be used by [OpenCV](https://github.com/opencv/opencv) library for deep learning inference.
-
-### DNN_TARGET
-
-Specifies the target device on which [OpenCV](https://github.com/opencv/opencv) library will perform the deep learning computations.
-
-
-## Face Detection
-
-This component is responsible for locating and identifying faces in images. It uses advanced deep learning algorithms to scan images and detect the regions that contain human faces. This section outlines the key configuration parameters that influence how the face detection model processes input images and optimizes detection results.
-
-### BLOB_FROM_IMAGE_SCALE_FACTOR
-
-Specifies the scaling factor applied to all pixel values when converting an image to a blob. Typically it is 1.0 (no scaling) or 1.0/255.0 (to normalize pixel values to the [0, 1] range).
-
-Remember that the scaling factor is also applied to the mean values. Both the scaling factor and the mean values must be the same for training and inference to get correct results.
-
-### BLOB_FROM_IMAGE_MEAN_VALUES
-
-Specifies the mean BGR values used in image preprocessing to normalize pixel values by subtracting the mean values of the training dataset. This helps in reducing model bias and improving accuracy.
-
-The specified mean values are subtracted from each channel (Blue, Green, Red) of the input image.
-
-Remember that the scaling factor is also applied to the mean values. Both the scaling factor and the mean values must be the same for training and inference to get correct results.
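As a rough illustration of how these two settings interact, the preprocessing can be sketched in plain Python. This is a simplified stand-in for OpenCV's `cv2.dnn.blobFromImage` (mean subtraction first, then scaling); the pixel and mean values below are illustrative only, not the engine's configuration:

```python
# Simplified sketch of blobFromImage-style preprocessing for one BGR pixel.
# OpenCV subtracts the mean first, then applies the scale factor, which is
# why both must match between training and inference.

def preprocess_pixel(bgr, mean_values, scale_factor):
    """Return the normalized (B, G, R) values for a single pixel."""
    return tuple((channel - mean) * scale_factor
                 for channel, mean in zip(bgr, mean_values))

# Illustrative values: no scaling, mean subtraction only
pixel = (120.0, 130.0, 140.0)               # B, G, R
means = (104.0, 177.0, 123.0)
print(preprocess_pixel(pixel, means, 1.0))  # (16.0, -47.0, 17.0)

# Scaling to [0, 1] without mean subtraction
print(preprocess_pixel(pixel, (0.0, 0.0, 0.0), 1.0 / 255.0))
```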
-
-### FACE_DETECTION_CONFIDENCE
-
-Specifies the minimum confidence score required for a detected face to be considered valid. Detections with confidence scores below this threshold are discarded as likely false positives.
-
-### NMS_THRESHOLD
-
-Specifies the Intersection over Union (IoU) threshold used in Non-Maximum Suppression (NMS) to filter out overlapping bounding boxes. If the IoU between two boxes exceeds this threshold, the box with the lower confidence score is suppressed. Lower values result in fewer, more distinct boxes; higher values allow more overlapping boxes to remain.
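The combined effect of the confidence and NMS thresholds can be sketched with a minimal, pure-Python version of the filtering step. This is illustrative only; the engine itself relies on OpenCV's built-in routines, and the boxes and scores below are invented:

```python
# Minimal sketch of confidence filtering followed by Non-Maximum Suppression
# over axis-aligned boxes given as (x1, y1, x2, y2).

def iou(a, b):
    """Intersection over Union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_detections(detections, confidence_min, nms_threshold):
    """Keep confident boxes, then suppress overlaps above the IoU threshold."""
    confident = [d for d in detections if d[0] >= confidence_min]
    confident.sort(key=lambda d: d[0], reverse=True)  # highest confidence first
    kept = []
    for conf, box in confident:
        if all(iou(box, k[1]) <= nms_threshold for k in kept):
            kept.append((conf, box))
    return kept

detections = [
    (0.95, (10, 10, 60, 60)),     # strong detection
    (0.90, (12, 12, 62, 62)),     # near-duplicate of the first, suppressed by NMS
    (0.80, (100, 100, 150, 150)), # distinct face elsewhere in the image
    (0.30, (40, 40, 90, 90)),     # below the confidence threshold, discarded
]
print(filter_detections(detections, confidence_min=0.5, nms_threshold=0.4))
# [(0.95, (10, 10, 60, 60)), (0.80, (100, 100, 150, 150))]
```

Raising `nms_threshold` toward 1.0 would let the near-duplicate box at (12, 12, 62, 62) survive as a second detection.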
-
-## Face Recognition
-
-This component builds on face detection to identify and differentiate between individual faces. This involves generating face encodings, which are numerical representations of the unique facial features used for recognition. These encodings can then be compared to determine if two images contain the same person or to find matches in a database of known faces.
-
-### FACE_ENCODINGS_NUM_JITTERS
-
-Specifies the number of times to re-sample the face when calculating the encoding. Higher values increase accuracy but are computationally more expensive and slower. For example, setting 'num_jitters' to 100 makes the process 100 times slower.
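The idea behind re-sampling can be shown with a toy sketch: the face is encoded several times under small perturbations and the encodings are averaged, trading linearly more work for a more stable result. This is not dlib's actual implementation; the noise model and vectors are invented for illustration:

```python
import random

# Toy stand-in for jittered face encoding: each "pass" produces a noisy
# encoding, and num_jitters passes are averaged into a stabler one.

def encode_jittered(true_encoding, rng):
    """One noisy encoding pass over a randomly perturbed face crop."""
    return [value + rng.gauss(0.0, 0.05) for value in true_encoding]

def face_encoding(true_encoding, num_jitters=1, seed=0):
    rng = random.Random(seed)
    samples = [encode_jittered(true_encoding, rng) for _ in range(num_jitters)]
    return [sum(vals) / num_jitters for vals in zip(*samples)]

truth = [0.2, 0.4, 0.6]
rough = face_encoding(truth, num_jitters=1)
stable = face_encoding(truth, num_jitters=100)
# Averaging over many jitters keeps the encoding closer to the underlying
# features, at 100x the cost of a single pass.
print(max(abs(a - b) for a, b in zip(rough, truth)))
print(max(abs(a - b) for a, b in zip(stable, truth)))
```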
-
-### FACE_ENCODINGS_MODEL
-
-Specifies the model type used for encoding face landmarks. It can be either 'small', which is faster and identifies only 5 key facial landmarks, or 'large', which is more precise and identifies 68 key facial landmarks but requires more computational resources.
-
-
-## Duplicate Finder
-
-This component is responsible for identifying duplicate images in the system by comparing face embeddings. These embeddings are numerical representations of facial features generated during the face recognition process. By calculating the distance between the embeddings of different images, the system can determine whether two images contain the same person, helping in the identification and removal of duplicates or grouping similar faces together.
-
-### FACE_DISTANCE_THRESHOLD
-
-Specifies the maximum allowable distance between two face embeddings for them to be considered a match. It helps determine if two faces belong to the same person by setting a threshold for similarity. Lower values result in stricter matching, while higher values allow for more lenient matches.
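The thresholding logic can be sketched in a few lines of Python. The short vectors below are illustrative only; real dlib embeddings are 128-dimensional, and 0.6 is merely a common default tolerance in the `face_recognition` library, not necessarily this system's configured value:

```python
import math

# Sketch of how a distance threshold decides whether two face embeddings
# belong to the same person.

def face_distance(a, b):
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(a, b, threshold=0.6):
    return face_distance(a, b) <= threshold

same_person = ([0.1, 0.2, 0.3], [0.12, 0.21, 0.29])
different   = ([0.1, 0.2, 0.3], [0.9, 0.1, 0.7])

print(is_match(*same_person))  # True: distance is well below the threshold
print(is_match(*different))    # False
```

Lowering the threshold makes matching stricter (fewer false duplicates, more misses); raising it does the opposite.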
-
diff --git a/docs/components/hde/did/index.md b/docs/components/hde/did/index.md
deleted file mode 100644
index d11c340..0000000
--- a/docs/components/hde/did/index.md
+++ /dev/null
@@ -1 +0,0 @@
-This feature consists of several interconnected components that work together to process images, detect and recognize faces, and find duplicate images using deep learning techniques.
\ No newline at end of file
diff --git a/docs/components/hde/did/workflow.md b/docs/components/hde/did/workflow.md
deleted file mode 100644
index c7f513e..0000000
--- a/docs/components/hde/did/workflow.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-tags:
- - Deduplication
----
-
-# Image Processing and Duplicate Detection
-
-The workflow uses pre-trained models from [OpenCV](https://opencv.org/) for face detection and [dlib](http://dlib.net/) for face recognition and landmark detection. This setup provides a fast, reliable solution for real-time applications, without requiring the training of models from scratch. OpenCV handles face detection using a Caffe-based model, while **dlib**, accessed through the [face_recognition](https://pypi.org/project/face-recognition/) library, manages recognition and duplicate identification.
-
-Future updates will involve custom-trained models to further improve performance.
-
-## Inference Mode Operation
-
-This application operates entirely in inference mode, relying on pre-trained models for both face detection and recognition tasks. **OpenCV** handles face detection, and **face_recognition**, a Python wrapper for **dlib**, performs face recognition and duplicate identification. This approach ensures efficient, real-time processing without the need for additional training, allowing the application to quickly deploy its capabilities.
-
-- **OpenCV**: Optimized for fast face detection, ideal for real-time image and video applications.
-- **dlib's face_recognition**: Focuses on generating face embeddings for comparison, providing high accuracy in identification.
-
-By combining OpenCV for detection and dlib for recognition, the system offers a balance of speed and precision.
-
-### Pre-Trained Models Storage
-
-- **OpenCV** uses a pre-trained [Caffe model](https://caffe.berkeleyvision.org/) stored in Azure Blob Storage, automatically downloaded at application startup.
-- **face_recognition** utilizes a pre-trained [dlib model](https://pypi.org/project/face_recognition_models/) stored locally within the container’s library directory.
-
-Administrators can manually update the **Caffe model** via the admin panel, allowing flexible updates or new model versions without altering the application code.
-
----
-
-## Face Detection and Recognition Models
-
-### OpenCV Model Details
-
-OpenCV powers the face detection component using a pre-trained model designed for real-time performance.
-
-#### Model Components
-
-- **deploy.prototxt**: Defines the network architecture and parameters for model execution.
-- **res10_300x300_ssd_iter_140000.caffemodel**: Contains trained weights, generated after 140,000 iterations using the **Caffe** framework.
-
-#### Model Architecture
-
-- **Res10 Architecture**: A lightweight model that balances speed and accuracy, perfect for real-time detection.
-- **300x300 Input Resolution**: Optimized for face detection at this resolution, ensuring a balance between detail and efficiency.
-- **SSD (Single Shot MultiBox Detector)**: A method that predicts bounding boxes and confidence scores in a single pass, allowing rapid detection of multiple faces in a single image.
-
-### Dlib Model Details
-
-The **dlib** models used for recognition and facial landmark detection include:
-
-1. **dlib_face_recognition_resnet_model_v1.dat**
-
- A modified **ResNet-34** model generating **128-dimensional face embeddings** for face recognition, achieving **99.38% accuracy** on the LFW benchmark.
-
-2. **mmod_human_face_detector.dat**
- A **CNN-based Max-Margin Object Detector (MMOD)** for accurate face detection, especially under difficult conditions like varied orientations or lighting.
-
-3. **shape_predictor_5_face_landmarks.dat**
- Detects **5 key facial landmarks** (eye corners and nose base), optimized for fast face alignment.
-
-4. **shape_predictor_68_face_landmarks.dat**
- Detects **68 facial landmarks** (eyes, nose, mouth, jawline), used for more detailed facial alignment and analysis.
-
----
-
-## Workflow Diagram
-
-The workflow diagram illustrates the overall process of image processing and duplicate detection. **OpenCV** is used for face detection, while **face_recognition** (built on **dlib**) handles face recognition and duplicate identification.
-
-```mermaid
-flowchart LR
- subgraph ImageProcessing[Image Processing]
- direction LR
-
- subgraph FaceDetection[Face Detection]
-
- subgraph DNNManager[DNN Manager]
- direction TB
- load_model[Load Caffe Model] -- computation backend\ntarget device --> set_preferences[Set Preferences]
- end
-
- DNNManager --> run_model
-
- direction TB
-            load_image[Load Image] -- decoded image as 3D numpy array\n(height, width, channels of BlueGreenRed color space) --> prepare_image[Prepare Image] -- blob 4D tensor\n(normalized size, use scale factor and means) --> run_model[Run Model] -- shape (1, 1, N, 7),\n1 image\nN is the number of detected faces\neach face is described by the 7 detection values --> filter_results[Filter Results] -- confidence is above the minimum threshold,\nNMS to suppress overlapping bounding boxes --> return_detections[Return Detections]
- end
-
- subgraph FaceRecognition[Face Recognition]
- direction TB
- load_image_[Load Image] --> detect_faces[Detect Faces] -- detected face regions\nnumber of times to re-sample the face\nkey facial landmarks --> generate_encodings[Generate Encodings] -- numerical representations of the facial features\n(face's geometry and appearance) --> save_encodings[Save Encodings]
- end
- end
-
- subgraph DuplicateFinder[Duplicate Finder]
- direction TB
-        load_encodings[Load Encodings] --> compare_encodings[Compare Encodings] -- face distance less than threshold --> return_duplicates[Return Duplicates]
- end
-
- ImageProcessing --> DuplicateFinder
- FaceDetection --> FaceRecognition
-
-```
diff --git a/docs/components/hde/index.md b/docs/components/hde/index.md
deleted file mode 100644
index 09b366b..0000000
--- a/docs/components/hde/index.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Deduplication
-
-The Deduplication Engine is a component of the HOPE ecosystem.
-
---8<-- "components/hde/deduplication_description.md"
-
-## Repository
-
-
-
-
-## Features
-
-- [Duplicated Image Detection](did/index.md)
-
-
-## Help
-
-**Got a question?** We've got answers.
-
-File a GitHub [issue](https://github.com/unicef/hope-dedup-engine/issues)
diff --git a/docs/components/hde/setup.md b/docs/components/hde/setup.md
deleted file mode 100644
index 02db6e0..0000000
--- a/docs/components/hde/setup.md
+++ /dev/null
@@ -1,126 +0,0 @@
----
-tags:
- - Deduplication
----
-
-## Prerequisites
-
-This project utilizes [UV](https://docs.astral.sh/uv/) as the package manager for managing Python dependencies and environments.
-
-To successfully set up and run this project, ensure that you have the following components in place:
-
-- **Postgres Database (v14+)**: A PostgreSQL database instance is required to store application data. Ensure that version 14 or newer is available and accessible.
-- **Redis Server**: Redis is used for caching and managing task queues. Ensure you have a running Redis server.
-- **Celery Worker(s)**: Celery is used for handling asynchronous tasks in the application. One or more workers are needed to process these tasks.
-- **Celery Beat**: Celery Beat is used for scheduling periodic tasks. Ensure that Celery Beat is configured and running.
-- **Azure Blob Storage Account(s)**: Azure Blob Storage is utilized for storing application files and media. Make sure you have access to one or more Azure Blob Storage accounts for file management.
-
-The code for this project is encapsulated within a Docker image, which provides an isolated and consistent environment for running the application. This Docker image is hosted on [Docker Hub](https://hub.docker.com/r/unicef/hope-dedupe-engine/), allowing easy access and deployment.
-
-## Environment Configuration
-
-This section covers the essential steps for verifying and configuring the environment settings required to run the project: displaying the current configuration, checking for missing variables, and ensuring all required settings are properly defined. Detailed descriptions of each variable are also provided.
-
-### Display the Current Configuration
-
- $ docker run -it -t unicef/hope-dedupe-engine:release-0.1 django-admin env
-
-### Mandatory Environment Variables
-To check the environment variables, run:
-
- $ docker run -it -t unicef/hope-dedupe-engine:release-0.1 django-admin env --check
-
-Ensure the following environment variables are properly configured:
-
- DATABASE_URL
- SECRET_KEY
- CACHE_URL
- CELERY_BROKER_URL
- MEDIA_ROOT
- STATIC_ROOT
- DEFAULT_ROOT
- FILE_STORAGE_DNN
- FILE_STORAGE_HOPE
- FILE_STORAGE_STATIC
- FILE_STORAGE_MEDIA
-
-### Variables Breakdown
-
-Detailed information about the required environment variables is provided for clarity and proper configuration.
-
-#### Operational
-
-##### DATABASE_URL
-The URL for the database connection. *Example:* `postgres://hde:password@db:5432/hope_dedupe_engine`
-
-##### SECRET_KEY
-A secret key for the Django installation. *Example:* `django-insecure-pretty-strong`
-
-##### CACHE_URL
-The URL for the cache server. *Example:* `redis://redis:6379/1`
-
-##### CELERY_BROKER_URL
-The URL for the Celery broker. *Example:* `redis://redis:6379/9`
-
-#### Root directories
-
-##### DEFAULT_ROOT
-The root directory for locally stored files. *Example:* `/var/hope_dedupe_engine/default`
-
-##### MEDIA_ROOT
-The root directory for media files. *Example:* `/var/hope_dedupe_engine/media`
-
-##### STATIC_ROOT
-The root directory for static files. *Example:* `/var/hope_dedupe_engine/static`
-
-#### Storages
-
-##### FILE_STORAGE_DEFAULT
-This backend is used for storing locally downloaded DNN model files and encoded data.
- ```
- FILE_STORAGE_DEFAULT=django.core.files.storage.FileSystemStorage
- ```
-##### FILE_STORAGE_DNN
-This backend is dedicated to storing DNN model files. Ensure that the following two files are present in this storage:
-
-1. *deploy.prototxt.txt*: Defines the model architecture.
-2. *res10_300x300_ssd_iter_140000.caffemodel*: Contains the pre-trained model weights.
-
-The current process involves downloading files from a [GitHub repository](https://github.com/sr6033/face-detection-with-OpenCV-and-DNN) and saving them to this specific Azure Blob Storage using the `django-admin upgrade --with-dnn-setup` command, or the specialized `django-admin dnnsetup` command.
-In the future, an automated pipeline related to model training could handle file updates.
-
-The storage configuration is as follows:
-```
-FILE_STORAGE_DNN="storages.backends.azure_storage.AzureStorage?account_name=&account_key=&overwrite_files=true&azure_container=dnn"
-```
-
-##### FILE_STORAGE_HOPE
-This backend is used for storing HOPE dataset images. It should be configured as read-only for the service.
- ```
- FILE_STORAGE_HOPE="storages.backends.azure_storage.AzureStorage?account_name=&account_key=&azure_container=hope"
- ```
-##### FILE_STORAGE_MEDIA
-This backend is used for storing media files.
-
-##### FILE_STORAGE_STATIC
-This backend is used for storing static files, such as CSS, JavaScript, and images.
-
-## Running the Application
-
-To get the application up and running, follow the steps outlined below. The first command will set up the initial configuration, while the subsequent commands will start the server and related support services, including worker processes and task scheduling.
-
-### Initial Setup
-
-Before starting the application, perform the initial setup using the following command. This will configure the necessary environment settings and prepare the application for runtime:
-
- docker run -d -t unicef/hope-dedupe-engine:release-0.1 setup
-
-### Starting the Server and Services
-
-Once the initial setup is complete, run the commands below to start the server and the required background services:
-
- docker run -d -t unicef/hope-dedupe-engine:release-0.1 run
- docker run -d -t unicef/hope-dedupe-engine:release-0.1 worker
- docker run -d -t unicef/hope-dedupe-engine:release-0.1 beat
-
-These commands will ensure that the application server, worker processes, and task scheduler are all running in the background, allowing the full functionality of the application to be available.
diff --git a/docs/components/hde/setup/config.md b/docs/components/hde/setup/config.md
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/components/hde/setup/docker.md b/docs/components/hde/setup/docker.md
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/components/hde/setup/virtualenv.md b/docs/components/hde/setup/virtualenv.md
deleted file mode 100644
index e69de29..0000000
diff --git a/docs/components/hde/troubleshooting.md b/docs/components/hde/troubleshooting.md
deleted file mode 100644
index d4be2ab..0000000
--- a/docs/components/hde/troubleshooting.md
+++ /dev/null
@@ -1,6 +0,0 @@
-If you encounter issues while running the service, the **admin panel** can be a useful tool for diagnosing and resolving problems. The admin panel provides access to various configurations, logs, and status indicators that can help identify potential causes of issues.
-
-To efficiently track and monitor errors within the application, **Sentry** is integrated as the primary tool for error logging and alerting.
-
-!!! warning "Sentry environment"
- For Sentry to work correctly, ensure that the **SENTRY_DSN** environment variable is set.
diff --git a/docs/components/index.md b/docs/components/index.md
index 019894a..42e17a1 100644
--- a/docs/components/index.md
+++ b/docs/components/index.md
@@ -14,7 +14,7 @@ systems and platforms, ensuring smooth workflows and easy expansion as your need
- [Kobo](kobo/index.md)
-- [DeduplicationEngine](hde/index.md)
+- [DeduplicationEngine](hde.md)
- [Country Report](reporting/index.md)
diff --git a/docs/glossary/terms/process.md b/docs/glossary/terms/process.md
index a9f612d..4b3f9b4 100644
--- a/docs/glossary/terms/process.md
+++ b/docs/glossary/terms/process.md
@@ -26,6 +26,6 @@ Sometimes used as a term pre-intervention to talk about who we are targeting.