A live demo of the application can be found at invoice.andreigatej.dev.
An application meant to facilitate the process of writing up a traditional invoice. Keeping track of goods sent and services provided should be a computer's job.
This application is now more of a learning project or a playground, because this is where I try out new things. It all started in 2019, when I took part in a national programming competition, InfoEducatie, where I took second place (the project's page can be found here).
- Clone the repo

  ```bash
  git clone https://github.com/Andrei0872/invoice-app.git
  ```
- Set up the `client`

  Install the required dependencies:

  ```bash
  cd client/ && npm i && cd -
  ```

  Set the variables in the `.env` file:

  ```bash
  cp client/.env.example client/.env
  ```
  The `VUE_APP_CRONJOB_START_DATE` variable takes in a number which represents a date in milliseconds (an example of computing such a value is shown after this list). If you set it to `NULL`, the demo mode is deactivated. Even when activated, it does nothing except log out the user after a certain amount of time. That's because this feature is synchronized with a cron job that runs on the server. You can read more about it here.
- Set up the `server`

  Install the required dependencies:

  ```bash
  cd server/ && npm i && cd -
  ```

  Set the variables in the `.env` file:

  ```bash
  cp server/.env.example server/.env
  ```

  Create a key that will be used for creating access tokens:

  ```bash
  openssl rand -base64 32 > ./server/.key
  ```
- Boot up the Docker containers that make up the application

  ```bash
  # Add the `-d` flag in order to run the containers in the background.
  docker-compose -f docker-compose.yml --env-file ./server/.env -p "$(basename $(pwd))_DEV" up
  ```

  If you want to stop and remove the containers:

  ```bash
  docker-compose -f docker-compose.yml --env-file ./server/.env -p "$(basename $(pwd))_DEV" down
  ```
  You can debug the `server` container just by commenting out a line and uncommenting another in the `docker-compose.yml` file:

  ```yaml
  server:
    command: npm run dev:debug
    # command: npm run dev
  ```

  After that, all you have to do is run the `docker-compose up` command from above once again and then press `F5` in VS Code, while making sure that the right configuration is selected.
- Useful commands

  List the images associated with this project's containers:

  ```bash
  # Assuming the name of the root directory includes the word 'invoice'.
  docker image ls "*invoice*"
  ```

  List the containers associated with this project (without using `docker-compose`):

  ```bash
  docker ps -a | grep 'invoice' | awk '{ print $1 }'
  ```
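Regarding the `VUE_APP_CRONJOB_START_DATE` variable from the `client` setup step: since it expects a date expressed in milliseconds, here is one hypothetical way of filling it in from the shell (the value is just an example, the variable name comes from the step above):

```bash
# Append a start date (the current time, converted to milliseconds) to the client's .env file.
echo "VUE_APP_CRONJOB_START_DATE=$(($(date +%s) * 1000))" >> client/.env
```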
I wanted to find a way to seamlessly add features to this app without having to SSH into my VPS to trigger a new deployment. So, I had a look at GitHub Actions, and this tool turned out to be exactly what I had been looking for. Now, whenever I push a commit to the `master` branch or a PR is merged into `master`, a deployment is triggered automatically.
All this logic can be found in the `autodeploy.yml` file.
Considering that this is a full-stack application, I didn't want to redeploy the entire application when only one part included new changes. For example, if the `client` brings something new, then only the `build` script of the Vue application should be run. Similarly, when the `server` contains some updates, then only the `server` container should be restarted.
In order to achieve that, I had to get familiar with a few Bash scripting concepts. This is the way the previously described logic is implemented at the moment (a consolidated sketch follows the list):
- whenever something is pushed to the `master` branch, the `origin/master` branch is updated by using the `git fetch origin master` command; then, I check whether the `HEAD` of `master` is the same as the `HEAD` of `origin/master`; the check is done by comparing the hashes of the two:

  ```bash
  if [ $(git log --format="%H" HEAD -1) == $(git log --format="%H" FETCH_HEAD -1) ]
  ```

  where `FETCH_HEAD` points to the commits that have just been fetched as a result of issuing the `git fetch origin master` command.
command. -
a check is done in order to determine if among the newly pushed commits there are some changes in the
/client
directory:git diff --pretty=%gd --stat HEAD FETCH_HEAD | grep -q client/ clientExitCode=$?
as a side note, if
grep
finds something, then the exit code will be0
(More about it here).f the check is positive, then the
build
command is run. -
- then, a check is done to see if something new appeared in the `/server` directory; if the check is positive, then the `server` container is restarted:

  ```bash
  docker-compose -f docker-compose.prod.yml --env-file ./server/.env -p "$(basename $(pwd))_PROD" restart server
  ```
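For an overall picture, here is a condensed, hypothetical sketch of such a script, run sequentially for now. The variable names and the `git merge` step are my assumptions; only the individual commands come from the list above:

```bash
#!/bin/bash
# Hypothetical, condensed version of the deploy logic described above.

git fetch origin master

# If HEAD already matches what has just been fetched, there is nothing to deploy.
if [ "$(git log --format='%H' HEAD -1)" == "$(git log --format='%H' FETCH_HEAD -1)" ]; then
  exit 0
fi

# grep exits with 0 when the pattern is found (see the side note above).
git diff --stat HEAD FETCH_HEAD | grep -q client/
clientExitCode=$?
git diff --stat HEAD FETCH_HEAD | grep -q server/
serverExitCode=$?

# Assumption: bring the local checkout up to date before rebuilding/restarting.
git merge FETCH_HEAD

if [ $clientExitCode -eq 0 ]; then
  npm run build --prefix client
fi

if [ $serverExitCode -eq 0 ]; then
  docker-compose -f docker-compose.prod.yml --env-file ./server/.env -p "$(basename $(pwd))_PROD" restart server
fi
```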
Another very interesting thing I've learned is that commands can be run in parallel in a Bash script. For instance, if the newly added commits comprise changes made in both the `client` and the `server` directories, I wouldn't want to wait for Vue's `build` command to finish before restarting the `server` container. In order to prevent that, here is how I'm running the `build` script:
```bash
if $shouldUpdateClient; then
  # Run the build in the background so the server can be restarted in the meantime.
  npm run build -- &
fi

if $shouldUpdateServer; then
  docker-compose -f docker-compose.prod.yml --env-file ./server/.env -p "$(basename $(pwd))_PROD" restart server
fi

# Wait for the background `build` job (if any) to finish before exiting.
wait
```
Undoubtedly, having to manually install the required dependencies in order to run the application locally is not the smoothest experience. For this reason, among others, I decided to improve the DX by dockerizing the entire application.
During the process, I faced many challenges, and solving them clarified a lot of things for me. One of the most interesting findings was how to make the `server` container wait for the `redis` and `db` containers until they're truly ready. For instance, it often happened that the `db` container had started, but it was not yet ready to accept connections, which consequently caused errors in the `server` container. This is how I overcame this issue:
```yaml
db:
  healthcheck:
    test: "mysql -uroot -p$DB_ROOT_PASSWORD -e 'select 1;'"
    interval: 1s
    retries: 20

redis:
  healthcheck:
    test: "redis-cli ping"
    interval: 1s
    retries: 20

server:
  depends_on:
    db:
      condition: service_healthy
    redis:
      condition: service_healthy
```
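As a quick sanity check, Docker exposes the resulting health status on each container, and `docker inspect` can query it directly. The container name below is an assumption; with `docker-compose`, the actual name also includes the project prefix:

```bash
# Prints "starting", "healthy" or "unhealthy" for the given container.
docker inspect --format '{{.State.Health.Status}}' db
```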
Since the application is exposed to the world, and since it's really just a playground where I can try new things, I didn't want the data to grow indefinitely. To that end, I did some research, and the best solution I found for this problem was a cron job that runs a script periodically (e.g. every 5 days); that script is responsible for cleaning up the accumulated data and also for resetting all the active sessions.

The script run by the cron job is `clean-up.sh`. Since I'm using Docker containers, the way to clear the data was to delete the volume associated with the `db` container. Luckily, the `docker-compose down -v` command does exactly that.
However, another tricky problem occurred. Because I'm using JWT tokens for authentication and authorization, there's no state I keep track of on the server, apart from the refresh tokens, which are stored in `redis`. So the problem was that logged-in users (whose access tokens were stored in localStorage) were able to use the application as usual, although everything had been cleaned up (including the `users` table). The reason for that was the key which had been used to create the access tokens. In order to solve this, apart from cleaning up the data, I also change that key, so that all the existing access tokens are invalidated.
This is the logic that I've followed:
```bash
# Generating a new key invalidates all the existing access tokens.
docker-compose down -v \
  && openssl rand -base64 32 > ./server/.key \
  && docker-compose up -d
```
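For reference, the cron side of this might look like the entry below. The path is hypothetical, and note that `*/5` in the day-of-month field restarts its count at the beginning of each month, so it is only an approximation of "every 5 days":

```bash
# Hypothetical crontab entry: run the clean-up script at 03:00 on every 5th day of the month.
0 3 */5 * * /path/to/invoice-app/clean-up.sh
```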
Since the beginning of this project, I was determined to learn as much as possible from it. That's why I went with pure SQL queries instead of using an ORM.
This opened the door to many learning opportunities. One of my favorite discoveries was the use of Stored Procedures and Triggers in MySQL.
This application has a very special meaning to me. Apart from being the first serious application I've built, it's also the first full-stack application I've ever deployed.
I also see this small project as a reference for future projects, in the sense that it contains many starting points.
I had a great time learning more about how the Internet works, along with many other captivating aspects: web servers, DNS, SSH, cryptography, JWT.