
Library Search

Library Search is a Blacklight project at Temple University Libraries for the discovery of library resources. See https://librarysearch.temple.edu/ for our production site.

The first phase of this project (i.e. TUL "Catalog on Blacklight") focused on search for our catalog records and fulfillment integration with our ILS, Alma. It now also includes discovery for: Primo Central Index article records, Springshare A-Z database records, and library website content.

Several companion repositories (such as the cob_az_index and cob_web_index gems referenced below) are also critical components for Solr indexing and other integrations in Library Search.

View performance data on Skylight.

Getting started

Install the Application

This only needs to happen the first time.

git clone git@github.com:tulibraries/tul_cob
cd tul_cob
bundle install
cp config/secrets.yml.example config/secrets.yml

We also need to configure the application with our Alma and Primo API keys for development work on the bento box or user account. Start by copying the example Alma and bento config files.

cp config/alma.yml.example config/alma.yml
cp config/bento.yml.example config/bento.yml

Then edit them, adding the API keys for our application as specified in our Ex Libris Developer Network account.

Finally, run the database migrations:

bundle exec rails db:migrate

Start the Application

We need to run two commands in separate terminal windows in order to start the application.

  • In the first terminal window, start solr by running:
bundle exec rake server

Platform Considerations

If you are building a Docker image on an M1/arm64 chip, set the PLATFORM environment variable to PLATFORM=arm64 so that Docker pulls an arm64 image compatible with your system.
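
For example, assuming you are using the Makefile targets described in the Docker section below:

PLATFORM=arm64 make up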

Start the Application with some sample data for Development

You can also have it ingest a few thousand sample records by setting the DO_INGEST environment variable to yes. For example:

DO_INGEST="yes" bundle exec rake server

Start the Application using Docker as an alternative

If Docker is available, we have defined a Makefile with many useful commands.

  • To start the dockerized app, run make up
  • To stop the dockerized app, run make down
  • To restart the app, run make restart
  • To enter into the app container, run make tty-app
  • To enter into the solr container, run make tty-solr
  • To run the linter, run make lint
  • To run the Ruby tests, run make test
    • Some tests require chromedriver to be installed on your system.
      • On Macs, run: brew install chromedriver
  • To run Javascript tests, run make test-js
  • To load sample data, run DO_INGEST=yes make up or make load-data
  • To reload solr configs, run make reload-configs
  • To attach to the running app container (good for debugging), run make attach
  • To build the prod image, run make build ASSETS_PRECOMPILE=yes PLATFORM=arm64 BASE_IMAGE=ruby:3.1.0-alpine
    • ASSETS_PRECOMPILE=no by default
    • PLATFORM=x86_64 by default
    • BASE_IMAGE=harbor.k8s.temple.edu/library/ruby:3.1.0-alpine by default
  • To deploy the prod image, run make deploy VERSION=x (VERSION=latest by default)
  • To run a security check on the image, run make secure (depends on trivy; install it with brew install aquasecurity/trivy/trivy)
  • To run a shell with image: make shell
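
A typical development session with these targets might look like the following sketch (adjust to your workflow):

DO_INGEST=yes make up   # start the app and solr, loading sample data
make test               # run the Ruby test suite
make down               # stop the containers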

Preparing Alma Data

MARCXML sample data that has been generated by Alma and exported via FTP needs to be processed before it is committed to the sample_data folder:

./bin/massage.sh sample_data/alma_bibs.xml

Ingest the sample Alma data with Traject

Now you are ready to ingest:

The simplest way to ingest a MARC data file into your local solr is with the ingest rake task. Called with no parameters, it will ingest the data at sample_data/alma_bibs.xml:

bundle exec rake ingest

You can also pass the path to a different file you would like to ingest as a parameter:

bundle exec rake ingest[/some/other/path.xml]

If you need to ingest a file multiple times locally and not have it rejected by Solr due to the update_date check, you can set SOLR_DISABLE_UPDATE_DATE_CHECK=yes:

SOLR_DISABLE_UPDATE_DATE_CHECK=yes rake ingest[spec/fixtures/purchase_online_bibs.xml]

Under the hood, that command uses traject with hard-coded defaults. If you need to override a default to ingest your data, you can call traject directly:

bundle exec traject -s solr.url=http://somewhere/solr -c lib/traject/indexer_config.rb sample_data/alma_bibs.xml

If using Docker, ingest by running docker-compose exec app traject -c app/models/traject_indexer.rb sample_data/alma_bibs.xml.

Ingesting URLs

Additionally, you can use bin/ingest.rb. This is a Ruby executable that works on both files and URLs. If you want to quickly ingest a MARC XML record from production, you can run something like:

bin/ingest.rb http://example.com/catalog/foo.xml

Ingest AZ Database data

AZ Database fixture data is loaded automatically when you run bundle exec rake tul_cob:solr:load_fixtures. If you want to ingest a single file or URL, use bundle exec cob_az_index ingest $path_to_file_or_url.
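
For example, to ingest a single hypothetical file (the path is illustrative):

bundle exec cob_az_index ingest /path/to/az_databases.json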

Note: If you make an update to cob_az_index, you will need to run bundle update cob_az_index locally.

Ingest web content data

Web content fixture data is loaded automatically when you run bundle exec rake tul_cob:solr:load_fixtures. If you want to ingest a single file or URL, use bundle exec cob_web_index ingest $path_to_file_or_url.
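
For example, to ingest from a hypothetical URL (the address is illustrative):

bundle exec cob_web_index ingest https://example.com/web_content.json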

Note: If you make an update to cob_web_index, you will need to run bundle update cob_web_index locally.

Importing from Alma

To import directly from Alma, execute the following Rake tasks. The harvest task may be supplied with an optional date/time range in ISO 8601 format, enclosed in brackets. You may provide from and/or to date/times; you may not provide only a to date/time.

bundle exec rake tul_cob:oai:harvest[from,to]
bundle exec rake tul_cob:oai:conform_all
bundle exec rake tul_cob:oai:ingest_all
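
For example, a harvest limited to a hypothetical one-day window (the timestamps are illustrative):

bundle exec rake tul_cob:oai:harvest[2023-01-01T00:00:00Z,2023-01-02T00:00:00Z]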

Running the Tests

bundle exec rake ci will start solr, clean out all solr records, ingest the test records, and run your test suite.

bundle exec rake rspec will start your solr and run your test suite, assuming you already have the test records in your test solr.

The rspec rake task can also take any RSpec command-line parameters; for example, to use a seed to determine test order, you can run:

bundle exec rake rspec["--seed=12345"]

Relevance Tests

Running LibGuides relevance tests

Because we are effectively testing an outside service with the LibGuides relevance tests, we do not run them on CI. To run them locally, export appropriate values for the $LIB_GUIDES_API_KEY and $LIB_GUIDES_SITE_ID environment variables and point $SOLR_URL at the production solr.
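
For example, with placeholder values (substitute real credentials and the real production solr URL):

export LIB_GUIDES_API_KEY=your-api-key
export LIB_GUIDES_SITE_ID=your-site-id
export SOLR_URL=https://production-solr.example.edu/solr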

Then run make test-libguides-relevance

Ingest LibGuide AZ documents

Locally, you will need to add an 'az-database' core to solr (this is handled automatically for docker/libqa/production).
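
One way to create the core on a standalone local Solr installation (a sketch; your local setup may differ):

bin/solr create -c az-database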

Ingest AZ database documents by running:

./bin/libguide_cache.rb
./bin/ingest-libguides.sh