Adds ADR for historical analytics approach #47

Merged
merged 1 commit into from
Jun 21, 2024
@@ -0,0 +1,177 @@
# 5. Use multiple minimal historical analytics models

Date: 2024-06-20

## Status

Accepted

## Context

Knowing how TACOS performs over time will be important to understanding our progress in this strategic endeavor.

### Potential ways to approach this problem

#### Monthly checks of all data against all current algorithms

While our data volume is low, we can likely create tasks that run the entire backlog of terms against each of our detection algorithms, record the total number of matches each returns, and store those results in the database.

This might result in data that looks something like:

| | DOI | ISSN | ISBN | PMID | Journal Title | Staff categorized term | No matches |
|---|---|---|---|---|---|---|---|
| June 2024 | 10 | 20 | 3 | 2 | - | - | 6,000 |
| July 2024 | 20 | 30 | 7 | 5 | - | - | 15,000 |
| Aug 2024 | 45 | 40 | 11 | 12 | 12,000 | - | 9,000 |
| Sept 2024 | 60 | 70 | 21 | 33 | 22,000 | 100 | 15,000 |
| ...etc | | | | | | | |

Because this approach checks every term against every current algorithm, the numbers generally increase each month as new terms are searched for. It also has the benefit of showing how new algorithms would have behaved with older data: in this hypothetical scenario, when "Journal Title" was introduced, the number of "No matches" dropped significantly.

##### Class diagram for Total Matches By Algorithm

```mermaid
classDiagram
class TotalMatchesByAlgorithm
TotalMatchesByAlgorithm: +Integer id
TotalMatchesByAlgorithm: +Date month
TotalMatchesByAlgorithm: +Integer doi
TotalMatchesByAlgorithm: +Integer issn
TotalMatchesByAlgorithm: +Integer isbn
TotalMatchesByAlgorithm: +Integer pmid
TotalMatchesByAlgorithm: +Integer journal_title
TotalMatchesByAlgorithm: +Integer staff_categorized
TotalMatchesByAlgorithm: +Integer no_matches
```
Comment on lines +33 to +45
Member

A few questions about this...

  1. Am I correct that the TotalMatchesByAlgorithm and MonthlyMatchesByAlgorithm tables have identical structures, so the difference is entirely in the code that populates them?
  2. At the moment, we have application code that would populate the doi, issn, isbn, and pmid columns - but the journal_title and staff_categorized columns feel unneeded right now. Would we create them now because we've mostly decided that we want to have them? Or would we wait until we can actually put something in them before creating the columns?
  3. More philosophically, the care and feeding of this application would require us to keep this data model in sync with the detectors that we've implemented - so it feels like this would be a good candidate for inclusion in the PR template for this app?

Contributor

Regarding question 2, my preference would be to add those columns when we're ready to use them. However, I like including them in the class diagram as an indication of future state.

Member Author

  1. yes
  2. yes, we'd only add fields as we need them, not predict what we might need later
  3. good point. Adding an "if this PR adds a new algorithm, please make sure analytics account for it" kind of note makes sense.


It is possible for a single search term to show up in the counts for multiple columns. For instance, a citation may include both a DOI and a Journal Title... and potentially, in the future, a "citation" match.
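
To make this approach concrete, here is a hedged sketch of the monthly job it implies. The `TotalMatchesByAlgorithm` model comes from the diagram above, but the `Detector` classes and their `match?` method are assumptions for illustration, not TACOS's actual API.

```ruby
# Hypothetical monthly aggregation sketch; Detector classes and match? are
# assumptions, not the application's real interface.
class TotalMatchesByAlgorithmJob
  DETECTORS = {
    doi: Detector::Doi,
    issn: Detector::Issn,
    isbn: Detector::Isbn,
    pmid: Detector::Pmid
  }.freeze

  def perform
    counts = Hash.new(0)

    # Walk the entire backlog of terms, not just this month's searches.
    Term.find_each do |term|
      matched = false
      DETECTORS.each do |column, detector|
        if detector.match?(term.phrase)
          counts[column] += 1
          matched = true
        end
      end
      counts[:no_matches] += 1 unless matched
    end

    TotalMatchesByAlgorithm.create!(month: Date.current.beginning_of_month, **counts)
  end
end
```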

#### Monthly checks of current data against all current algorithms

Similar to the previous approach, this approach allows us to look at how our algorithms are doing.

A table might look something like:

| | DOI | ISSN | ISBN | PMID | Journal Title | Staff categorized term | No matches |
|---|---|---|---|---|---|---|---|
| June 2024 | 10 | 20 | 3 | 2 | - | - | 6,000 |
| July 2024 | 10 | 10 | 4 | 3 | - | - | 9,000 |
| Aug 2024 | 25 | 10 | 4 | 7 | 5,000 | - | 1,000 |
| Sept 2024 | 15 | 30 | 10 | 11 | 5,000 | 100 | 900 |
| ...etc | | | | | | | |

In this case, each row only covers the Terms that had SearchEvents during the month being analyzed, so we don't have an ever-growing number of terms to check each month. This comes at the cost of not fully understanding how new or adjusted algorithms would perform with historical search terms.

##### Class diagram for Monthly Matches by Algorithm

```mermaid
classDiagram
class MonthlyMatchesByAlgorithm
MonthlyMatchesByAlgorithm: +Integer id
MonthlyMatchesByAlgorithm: +Date month
MonthlyMatchesByAlgorithm: +Integer doi
MonthlyMatchesByAlgorithm: +Integer issn
MonthlyMatchesByAlgorithm: +Integer isbn
MonthlyMatchesByAlgorithm: +Integer pmid
MonthlyMatchesByAlgorithm: +Integer journal_title
MonthlyMatchesByAlgorithm: +Integer staff_categorized
MonthlyMatchesByAlgorithm: +Integer no_matches
```

This approach is likely complementary to the "count all matches" approach, and as both store minimal data it is possible we should do both. A single pass through all Terms could likely populate both tables if a lookup were done to see whether each Term had a SearchEvent during the month. Care would need to be taken not to create an N+1 SQL query when doing this in a single job, but it should be possible, as in the sketch below.
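
One hedged sketch of that single pass, reusing the same hypothetical `Detector` classes assumed above: the IDs of Terms with a SearchEvent in the target month are loaded once up front, so the per-term loop never queries SearchEvents and no N+1 is introduced.

```ruby
# Hypothetical single-pass sketch feeding both aggregate tables; model and
# detector names are assumptions.
DETECTORS = { doi: Detector::Doi, issn: Detector::Issn,
              isbn: Detector::Isbn, pmid: Detector::Pmid }.freeze

def aggregate_for(month)
  # One query up front instead of one SearchEvent lookup per Term (avoids N+1).
  current_term_ids = SearchEvent
                       .where(created_at: month.all_month)
                       .distinct
                       .pluck(:term_id)
                       .to_set

  totals = Hash.new(0)
  monthly = Hash.new(0)

  Term.find_each do |term|
    term_matched = false
    DETECTORS.each do |column, detector|
      next unless detector.match?(term.phrase)

      term_matched = true
      totals[column] += 1
      monthly[column] += 1 if current_term_ids.include?(term.id)
    end
    next if term_matched

    totals[:no_matches] += 1
    monthly[:no_matches] += 1 if current_term_ids.include?(term.id)
  end

  TotalMatchesByAlgorithm.create!(month: month, **totals)
  MonthlyMatchesByAlgorithm.create!(month: month, **monthly)
end
```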

#### Detections table with verbose data about matches

A more verbose way to record detections from our algorithms might be a log-like table that ties to SearchEvents.

##### Class diagram for verbose Detections Table

```mermaid
classDiagram

Term --< SearchEvent : has many
SearchEvent --< SearchDetections : has many

class Term
Term: +Integer id
Term: +String phrase

class SearchEvent
SearchEvent: +Integer id
SearchEvent: +Integer term_id
SearchEvent: +Timestamp created_at

class SearchDetections
SearchDetections: +Integer id
SearchDetections: +Integer searchevent_id
SearchDetections: +JSON DOI NULL
SearchDetections: +JSON ISBN NULL
SearchDetections: +JSON ISSN NULL
SearchDetections: +JSON PMID NULL
SearchDetections: +JSON Hint NULL
SearchDetections: +Timestamp created_at
```

We already have a record of every time we’ve seen a given search for any phrase - adding a table (or even fields in the SearchEvent table) for each detector we build would allow us to have a record of whether each detector fired, and what it returned. Setting a default value of NULL on each field, but storing an empty non-NULL value via code, would allow us to query for failures to detect (which would be stored as {}) separately from any search prior to that detector being in place (which would be stored as NULL).
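
To make that convention concrete, hedged scopes like the following would separate the three states. The `SearchDetection` model name, the `doi` column, and a Postgres JSON column type are all assumptions for illustration.

```ruby
# Hypothetical scopes; model, column, and database (Postgres) are assumptions.
class SearchDetection < ApplicationRecord
  belongs_to :search_event

  scope :doi_not_run, -> { where(doi: nil) }              # detector predates this row (NULL)
  scope :doi_misses,  -> { where("doi = '{}'") }          # detector ran, found nothing
  scope :doi_hits,    -> { where.not(doi: nil).where.not("doi = '{}'") }
end
```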

An approach like this, which generates not only counts but also the details of what is detected / returned, would provide greater visibility into application performance - but at the cost of needing to be recorded at query time, rather than retroactively in a batch.

This stores a lot more data, and it isn't clear what we would use it for. This approach should likely be implemented in the future only if we find a specific question that can be answered solely with this level of detail. We would lose historical data if we wait to implement this, but that risk is outweighed by avoiding storing a large amount of data without a clear purpose.

Contributor

Alternatively, we could store the data for now and drop the table if it becomes clear that there's no purpose for it. The downside of that (aside from storing a lot of data in the interim) is knowing when to decide whether it's useful or not.

Member

I concur - knowing when to get rid of that data feels like a thornier process than just not collecting it until we have a clearer understanding of how we'll handle it and when to delete it.


#### Detections table with only match status stored

This approach keeps the pros of the detections table presented above, but eliminates storing the extra JSON responses in favor of a simple value (a recording sketch follows the list):

- 0 = no match
- 1 = match
- null = did not run (set by default when an algorithm is introduced after the SearchEvent was created; we'd never set this explicitly)
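
A minimal recording sketch, assuming hypothetical `Detector` classes with a boolean `match?` and a `SearchDetection` model shaped like the diagram below; columns we never assign stay at their NULL default, which is how "did not run" is represented.

```ruby
# Hypothetical sketch; Detector classes and SearchDetection columns are assumptions.
def record_detections(search_event)
  phrase = search_event.term.phrase

  SearchDetection.create!(
    search_event: search_event,
    doi:  Detector::Doi.match?(phrase)  ? 1 : 0,
    issn: Detector::Issn.match?(phrase) ? 1 : 0,
    isbn: Detector::Isbn.match?(phrase) ? 1 : 0,
    pmid: Detector::Pmid.match?(phrase) ? 1 : 0
    # A detector added later simply isn't listed here yet, so its column
    # remains NULL (= did not run) for rows written before it existed.
  )
end
```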

##### Class diagram for Simplified Detections Table

```mermaid
classDiagram

Term --< SearchEvent : has many
SearchEvent --< SearchDetections : has many

class Term
Term: +Integer id
Term: +String phrase

class SearchEvent
SearchEvent: +Integer id
SearchEvent: +Integer term_id
SearchEvent: +Timestamp created_at

class SearchDetections
SearchDetections: +Integer id
SearchDetections: +Integer searchevent_id
SearchDetections: +Integer DOI NULL
SearchDetections: +Integer ISBN NULL
SearchDetections: +Integer ISSN NULL
SearchDetections: +Integer PMID NULL
SearchDetections: +Integer Hint NULL
SearchDetections: +Timestamp created_at
```
Comment on lines +146 to +155
Member

I'm continuing to wonder about the Detections table structure, and wanted to mention two alternatives.

First, a more normalized approach. The need to create a column for every detector is picking at my data normalization instincts, and I'm tempted to float an arrangement like this:

```mermaid
classDiagram

  Term --< SearchEvent : has many
  LinkSearchDetectors --< SearchEvent : has many
  LinkSearchDetectors --< Detectors : has many

  class Term
    Term: +Integer id
    Term: +String phrase

  class SearchEvent
    SearchEvent: +Integer id
    SearchEvent: +Integer term_id
    SearchEvent: +Timestamp created_at

  class LinkSearchDetectors
    LinkSearchDetectors: +Integer id
    LinkSearchDetectors: +Integer searchevent_id
    LinkSearchDetectors: +Integer detector_id
    LinkSearchDetectors: +Integer result

  class Detectors
    Detectors: +Integer id
    Detectors: +String name
```

However, I don't think we should do this. While this design is more normalized, it would make maintenance of the application harder. Creating a new detector would no longer be a purely code-based exercise, accomplishable with a PR and related data migration. We would still need the PR for the code change, and a migration for any data model changes, but we would also need to create new records in the Detectors table after the migrations are complete.


The second option would be to combine the SearchEvent and SearchDetections table:

```mermaid
classDiagram

  Term --< SearchEvent : has many

  class Term
    Term: +Integer id
    Term: +String phrase

  class SearchEvent
    SearchEvent: +Integer id
    SearchEvent: +Integer term_id
    SearchEvent: +Integer DOI NULL
    SearchEvent: +Integer ISBN NULL
    SearchEvent: +Integer ISSN NULL
    SearchEvent: +Integer PMID NULL
    SearchEvent: +Integer Hint NULL
    SearchEvent: +Timestamp created_at
```

I don't think this makes sense either. While it does work for naturally-received repeat searches over time (every search event gets its own entry, which will trigger its own detections), this model doesn't allow us to retroactively run the sorts of monthly reports that you're envisioning - not without creating extraneous SearchEvent records for every monthly report, which will make counts inaccurate.

What I'm not sure about, though, is how to handle commonly-seen searches during these monthly reports. These reports will be run once for every Term, not for every SearchEvent - because we don't need to run web of science 72 times in every monthly report. We need to run it once. Every monthly report would end up generating a new SearchDetections record, and I'm assuming that repeat runs would be latched onto the first received SearchEvent record?

The data might look like this:

| Term ID | Term phrase |
|---|---|
| 1 | web of science |
| 2 | some custom term only seen once |

| SearchEvent ID | Term ID | Created at |
|---|---|---|
| 1 | 1 | June 14, 2024 8:03 am |
| 2 | 2 | June 21, 2024 10:14 am |
| 3 | 1 | July 1, 2024 4:17 pm |
| 4 | 4 | July 1, 2024 8:45 pm |

| SearchDetection ID | SearchEvent ID | DOI | ISBN | ISSN | PMID | Hint | Created at |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | June 14, 2024 8:03 am |
| 2 | 2 | 0 | 0 | 0 | 0 | 0 | June 21, 2024 10:14 am |
| 3 | 3 | 0 | 0 | 0 | 0 | 0 | July 1, 2024 4:17 pm |
| 4 | 4 | 0 | 0 | 0 | 0 | 1 | July 4, 2024 8:45 pm |
| 5 | 1 | 0 | 0 | 0 | 0 | 1 | August 1, 2024 12:00 am |
| 6 | 2 | 0 | 0 | 0 | 0 | 0 | August 1, 2024 12:00 am |
| 7 | 1 | 0 | 0 | 0 | 0 | 1 | September 1, 2024 12:00 am |
| 8 | 2 | 0 | 0 | 0 | 0 | 0 | September 1, 2024 12:00 am |

The SearchDetections records here indicate four naturally received search events and their detections (rows 1-4 in the detections table). There are also two rounds of monthly batch reports (rows 5-8), including every term, hanging off the first observation of each search term.

This feels workable to me, but I want to make sure I'm following.


## Decision

We'll use a combination of approaches.

Specifically, we'll implement:

- Monthly checks of all data against all current algorithms
- Monthly checks of current data against all current algorithms
- Detections table with only match status stored

Neither of the Detections table approaches provides context as to how our new algorithms would do with older Terms.

The `Detections table with only match status stored` option could in theory simulate `Monthly checks of current data against all current algorithms`, but maintaining the aggregate table will be minimal work and will allow us not to lose data if, for some reason, we choose to purge SearchEvent data after its period of usefulness.

## Consequences

By using all three of the fairly simple versions, we should get data that helps us understand how our algorithms are doing.

With the two monthly aggregations, we see both how our algorithms do with all data the system has ever seen and how they did with the most current data. Comparing the two will be useful in understanding whether our algorithms are only useful for older data or whether they continue to be useful for current data.

The Detections table gives us an additional perspective: we can look at how a specific Term does over time. This ability to drill down into a single Term should give us insight into how our system is working in a way the aggregate data can only hint at. By using the simplified version of the Detections table, we do lose out on knowing exactly what we responded with, but as we don't have a clear use case for that now, we should accept that risk; if it becomes essential in the future, we can adjust what we store going forward.
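
For example, a drill-down into a single Term's history might look like the following sketch; the association and column names are assumptions based on the simplified diagram above, using singular Rails-style model names.

```ruby
# Hypothetical drill-down: one Term's detection results over time.
term = Term.find_by(phrase: "web of science")

SearchDetection
  .joins(:search_event)
  .where(search_events: { term_id: term.id })
  .order(:created_at)
  .pluck(:created_at, :doi, :issn, :isbn, :pmid)
```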