- Changed background jobs from standard Rails Active Job with Resque to Sidekiq
- Added a Go app that consumes the Redis queue to create both Chats and Messages, and removed that logic from the Rails services
- Added Elasticsearch partial matching
- Added a new endpoint to re-index the messages created by the Go app (because Elasticsearch callbacks won't be triggered by database-level changes made by the Go app)
- Replaced the pessimistic locking mechanism with optimistic locking using Rails' default `lock_version`, handling stale object errors with a retry (hopefully one retry is enough)
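The stale-object retry can be sketched in plain Ruby; in the app the rescued error would be `ActiveRecord::StaleObjectError`, raised when an update sees a mismatched `lock_version`. The helper name and retry budget below are illustrative, and a local error class stands in so the sketch is self-contained:

```ruby
# Stand-in for ActiveRecord::StaleObjectError so the sketch runs on its own.
class StaleObjectError < StandardError; end

# Run the block, retrying once (by default) if the optimistic lock is stale.
# In the app the block would reload the record and redo the counter update.
def with_stale_retry(retries: 1)
  attempt = 0
  begin
    attempt += 1
    yield attempt
  rescue StaleObjectError
    raise if retries.zero?
    retries -= 1
    retry
  end
end

# First attempt collides, second succeeds:
result = with_stale_retry { |n| raise StaleObjectError if n == 1; :updated }
# result => :updated
```

If the single retry is also stale, the error propagates, which is the trade-off accepted in the changelog entry above.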
- Used Redis in the creation process for chats and messages, keeping all the interaction in Redis for better availability, then syncing the values from the Redis cache to the main SQL DB every hour
- Minor code enhancements
- Creating a new application creates a new record in the DB and returns the application number
- The user uses this number to create a new chat; the system checks whether the application already has a value in the Redis database, and if not, creates one with key: application.number, value: application.chats_count + 1, then uses the same increment method on every chat creation
- When the user tries to create a new message, we do the same as the previous step: check the Redis DB for a matching record of the chat, and if not, create one with key: chat.number, value: chat.messages_count + 1, then use the same increment method on every message creation
- Creating a new chat or a new message enqueues a job in the Redis queue, which is consumed by the Go app and inserted into the DB from there
- On successful insertion, the Go app calls an Elasticsearch re-indexing endpoint in the Rails app
- Every hour, a rake task syncs the SQL database with the Redis database
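The numbering scheme above can be sketched as follows. A plain Hash stands in for Redis here (in the app, Redis commands such as `SET ... NX` plus `INCR` would make the seed-then-increment atomic), and the class and key names are illustrative:

```ruby
# Hands out chat/message numbers, seeding each counter from the SQL
# count column the first time a key is seen. A Hash stands in for Redis.
class CounterStore
  def initialize
    @store = {}
  end

  # e.g. next_number("application-42", seed: application.chats_count)
  def next_number(key, seed:)
    @store[key] = seed unless @store.key?(key)
    @store[key] += 1
  end
end

counters = CounterStore.new
counters.next_number("application-42", seed: 0)  # => 1 (first chat)
counters.next_number("application-42", seed: 0)  # => 2 (seed ignored once set)
```

Seeding from the SQL column means a fresh Redis instance picks up where the last hourly sync left off, rather than restarting numbering from 1.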
- Good news! The project is dockerized, so count to 3 and this section is done
- After forking the repo, run `docker-compose up`
- When the server is up and running, visit localhost:3000 to check that everything looks good
- API documentation uses Swagger; if you are not familiar with Swagger API documentation, this introductory video is only 10 mins long :)
- Main entities:
  - Application
  - Chat
  - Message
- An application has many chats
- A chat belongs to an application and has many messages
- A message belongs to a chat
- Users are referred to as (application)
- Creating apps is open for public usage without any authentication or authorization
- Applications are identified by a token that is used for creating chats and messages
- Chats are created by application token
- Each chat has an incremental identifier number that is used for creating its messages
- Messages are created by both application token and chat number
- Anyone with an application token and a chat number can search for messages by keyword, or list all messages in the chat
- Real DB ids must be obscured/hidden
- Message searching should be done with Elasticsearch
- Responses must include the identifier number even if the object will be queued for delayed creation
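The keyword search could be served by a partial-match query like the sketch below, which only builds the request body. Assumptions: a gem such as `elasticsearch-model` would execute it via something like `Message.search`, and `body`/`chat_id` are the indexed field names:

```ruby
# Build an Elasticsearch query body for partial keyword matching,
# scoped to a single chat. Field names are assumptions, not the
# project's actual mapping.
def message_search_query(keyword, chat_id)
  {
    query: {
      bool: {
        must: [
          { match_phrase_prefix: { body: keyword } },
          { term: { chat_id: chat_id } }
        ]
      }
    }
  }
end

q = message_search_query("hel", 7)
```

`match_phrase_prefix` treats the last term of the keyword as a prefix, so "hel" would match messages containing "hello" or "help"; an edge-n-gram analyzer is the usual alternative when mid-word matching is needed.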
Technical Debts
- When exactly to increment the counter columns in both chats and applications, or whether to just use a `before_create` callback:
  - Requirement [14] enforces having a pre-defined number sent to the consumer, which is supplied to the creation job to be used for object creation
  - This raises the need for a consistent rollback mechanism if creation fails, to keep the counter column data consistent
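One way to frame that rollback concern: treat the number as a reservation and undo it if handing the job off fails. A minimal sketch, where a Hash stands in for the Redis counters and all names are illustrative; note the rollback is naive, since a real implementation must account for concurrent reservations taken in between:

```ruby
counters = Hash.new(0)  # stands in for the Redis counters

# Reserve the next number, run the block (e.g. enqueue the creation job),
# and roll the counter back if the block raises.
def reserve_and_enqueue(counters, key)
  number = counters[key] += 1
  begin
    yield number
    number
  rescue StandardError
    counters[key] -= 1  # naive rollback; unsafe if another reservation landed
    raise
  end
end

first = reserve_and_enqueue(counters, "chat-7") { |n| n }  # => 1
```

On failure the counter returns to its previous value and the error propagates, so the caller can surface it or retry.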
- Using optimistic vs pessimistic locks while updating the counter:
  - Even though optimistic locking has better performance, it raises an error in the background that rolls back the declined changes; since we need our messages to be created, we chose pessimistic locking, preferring our main requirement over some performance boost (reference)
- Using locks or not when decrementing the counter on resource deletion:

```ruby
# Not using locks, as it's less likely to conflict
def decrement_chats_counter
  application.decrement!(:chats_count)
end
```
- Ruby
- Rails
- Redis
- Resque worker
- Elasticsearch
- Go App
- RSpec tests for Elasticsearch
- Better indices
- Benchmarking