## Benchmarks

### No-Op Benchmark

All these benchmarks were run on a notebook with this configuration:

```
Memory: 8 GB 1600 MHz DDR3
```
The jobs are almost no-op jobs: they simply return nil. RabbitMQ, the consumer, and the publisher all run on the same server.
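
To make the "no-op" framing concrete, here is a minimal Go sketch of such a handler; the function name and signature are illustrative, not paota's actual task API:

```go
package bench

// noOpJob is an illustrative no-op handler: it performs no work and returns
// nil immediately, so the benchmark measures broker and worker overhead
// rather than job execution time.
func noOpJob() error {
	return nil
}
```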

#### Conclusion

We achieved a benchmark of 50 publishers publishing requests and 1 consumer worker consuming them at a rate of 7,000 requests per second (concurrency=10 and PrefetchCount=100). If you need higher throughput, the concurrency can be increased further.
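
For reference, these two knobs map onto standard RabbitMQ consumer tuning: PrefetchCount caps how many unacknowledged messages the broker pushes to a consumer, and concurrency is the number of workers draining the delivery channel. The sketch below illustrates the idea with the amqp091-go client directly rather than through paota's own configuration; the connection URL and the queue name "tasks" are placeholders:

```go
package main

import (
	"log"
	"sync"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	const (
		concurrency   = 10  // goroutines consuming in parallel
		prefetchCount = 100 // unacked messages the broker pushes ahead
	)

	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// PrefetchCount: cap the number of unacknowledged messages in flight.
	if err := ch.Qos(prefetchCount, 0, false); err != nil {
		log.Fatal(err)
	}

	deliveries, err := ch.Consume("tasks", "", false, false, false, false, nil) // manual ack
	if err != nil {
		log.Fatal(err)
	}

	// Concurrency: a pool of workers draining the same delivery channel.
	var wg sync.WaitGroup
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for d := range deliveries {
				// No-op job: do nothing, then acknowledge.
				d.Ack(false)
			}
		}()
	}
	wg.Wait() // runs until the connection or channel is closed
}
```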

### Task with MongoDB Data Insert Performance Test

This performance test was conducted to evaluate the throughput and processing capabilities of the system using Paota to consume data from RabbitMQ and insert it into MongoDB.
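
In outline, the pipeline consumes each RabbitMQ message, inserts it into MongoDB as a uniformly structured document, and acknowledges the message only after the insert succeeds. The sketch below shows that loop using the amqp091-go and official MongoDB Go drivers directly rather than paota's task API; it assumes an already opened channel and collection, and the document shape is illustrative:

```go
package bench

import (
	"context"

	amqp "github.com/rabbitmq/amqp091-go"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

// consumeIntoMongo drains a RabbitMQ queue and inserts each message into a
// MongoDB collection with a uniform structure, acknowledging a message only
// after its insert succeeds so the ack rate reflects completed writes.
func consumeIntoMongo(ctx context.Context, ch *amqp.Channel, coll *mongo.Collection, queue string) error {
	deliveries, err := ch.Consume(queue, "", false, false, false, false, nil) // manual ack
	if err != nil {
		return err
	}
	for d := range deliveries {
		doc := bson.M{"payload": string(d.Body)} // illustrative uniform shape
		if _, err := coll.InsertOne(ctx, doc); err != nil {
			_ = d.Nack(false, true) // requeue on failure
			continue
		}
		_ = d.Ack(false)
	}
	return nil
}
```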

#### Test Setup

```
System Specs: MacBook with 16GB RAM
Locally Hosted Services: RabbitMQ, MongoDB
```

#### Results

##### Message Throughput
Throughput Rate: 12k messages per second.

##### Data Processing
Total Records Processed: 10 lakh (1 million) records.

##### Processing Time
Completed within 5 minutes.

##### Acknowledgment Rate
More than 12k messages per second.

![Ack Rate](https://github.com/manishjha1991/paota/blob/performance-mongodb-ack-rate/docs/images/ackrate.png?raw=true)

##### Data Integrity
Data stored in MongoDB matches the data consumed from RabbitMQ.

![MonGoDb](https://github.com/manishjha1991/paota/blob/performance-mongodb-ack-rate/docs/images/mongodbRecord.png?raw=true)

Thank you for flying Paota!
