This application can run on a single Docker host using `docker-compose` (recommended for development environments). For production, see the `cloudformation` directory for AWS Elastic Container Service stack templates.
```shell
$ git clone https://github.com/LibraryOfCongress/concordia.git
$ cd concordia
$ docker-compose up
```

Browse to http://localhost.
If you intend to edit static resources, templates, etc. and would like to enable Django's `DEBUG` mode, ensure that your environment has `DEBUG=true` set before you run `docker-compose up` for the app container. The easiest way to do this permanently is to add it to the `.env` file:

```shell
$ echo DEBUG=true >> .env
```
You will likely want to run the Django development server on your localhost instead of within a Docker container if you are working on the backend. This is best done using the same `pipenv`-based toolchain as the Docker deployments: Python dependencies and virtual environment creation are handled by Pipenv.
If you want to add a new Python package requirement to the application environment, it must be added to the `Pipfile` and the `Pipfile.lock` file. This can be done with the command:

```shell
$ pipenv install <package>
```

If the dependency you are installing is only of use for developers, mark it as such using `--dev` so it will not be deployed to servers. For example:

```shell
$ pipenv install --dev django-debug-toolbar
```
Both the `Pipfile` and the `Pipfile.lock` files must be committed to the source code repository any time you change them, to ensure that all testing uses the same package versions you used during development.
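As a quick sanity check before pushing, `git status` limited to those two files will show whether either still has uncommitted changes:

```shell
# List only the dependency files; any output means something
# is modified or untracked and still needs to be committed.
$ git status --short Pipfile Pipfile.lock
```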
Instead of running `docker-compose up` as above, start everything except the app:

```shell
$ docker-compose up -d db redis importer
```

This will run the database in a container to ensure that it always matches the expected version and configuration. If you want to reset the database, simply delete the local container so it will be rebuilt the next time you run `docker-compose up`:

```shell
$ docker-compose rm --stop db
```
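A full reset cycle, sketched using the same containers described above, looks like this:

```shell
# Stop and remove the db container; its data is discarded
$ docker-compose rm --stop db
# Recreate a fresh database container in the background
$ docker-compose up -d db
```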
These steps only need to be performed the first time you set up a fresh virtualenv environment:
- Ensure that you have the necessary C library dependencies available:
  - `libmemcached`
  - `postgresql`
  - `node` & `npm` for the front-end tools
- Ensure that you have Python 3.6 or later installed
- Install pipenv, either using a tool like Homebrew (`brew install pipenv`) or using `pip`:

  ```shell
  $ pip3 install pipenv
  ```
- Let Pipenv create the virtual environment and install all of the packages, including our developer tools:

  ```shell
  $ pipenv install --dev
  ```

  n.b. If `libmemcached` is installed using Homebrew, you will need to set `CFLAGS` long enough to build it:

  ```shell
  $ CFLAGS=$(pkg-config --cflags libmemcached) LDFLAGS=$(pkg-config --libs libmemcached) pipenv install --dev
  ```

  Once it has been installed you will not need to repeat this process unless you upgrade the version of libmemcached or Python installed on your system.
- Configure the Django settings module in the `.env` file, which Pipenv will use to automatically populate the environment for every command it runs:

  ```shell
  $ echo DJANGO_SETTINGS_MODULE="concordia.settings_dev" >> .env
  ```

  You can use this to set any other values you want to customize, such as `POSTGRESQL_PW` or `POSTGRESQL_HOST`.
- Apply any database migrations:

  ```shell
  $ pipenv run ./manage.py migrate
  ```
- Start the development server:

  ```shell
  $ pipenv run ./manage.py runserver
  ```
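Putting the environment settings above together, a development `.env` might look like this (the database values are illustrative placeholders, not required names):

```shell
DEBUG=true
DJANGO_SETTINGS_MODULE=concordia.settings_dev
POSTGRESQL_HOST=localhost
POSTGRESQL_PW=changeme    # placeholder; use your own password
```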
Once the database, redis service, importer and the application are running, you're ready to import data. First, create a Django admin user and log in as that user. Then, go to the Admin area (under Account) and click "Bulk Import Items". Upload a spreadsheet populated according to the instructions. Once all the import jobs are complete, publish the Campaigns, Projects, Items and Assets that you wish to make available.
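The admin user mentioned above can be created with Django's standard `createsuperuser` management command, run through pipenv so it picks up the `.env` settings:

```shell
$ pipenv run ./manage.py createsuperuser
```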
To generate a model graph, make sure that you have GraphViz installed (e.g. `brew install graphviz` or `apt-get install graphviz`) and use the django-extensions `graph_models` command:

```shell
$ dot -Tsvg <(pipenv run ./manage.py graph_models concordia importer) -o concordia.svg
```
- Use a package manager such as Yarn or NPM to install our development tools:

  ```shell
  $ yarn install --dev
  $ npm install
  ```
- If you need a list of public-facing URLs for testing, there's a management command which may be helpful:

  ```shell
  $ pipenv run ./manage.py print_frontend_test_urls
  ```
Automated tools such as aXe are useful for catching low-hanging fruit and regressions. You can run aXe against a development server by giving it one or more URLs:

```shell
$ yarn run axe --show-errors http://localhost:8000/
$ pipenv run ./manage.py print_frontend_test_urls | xargs yarn run axe --show-errors
```
The `concordia/static/img` directory has a Makefile which will run an imagemin-based toolchain. Use of other tools such as ImageOptim may yield better results at the expense of portability, and is encouraged at least for comparison purposes.

```shell
$ make -C concordia/static/img/
```