diff --git a/docs/docs/en/data-storage.md b/docs/docs/en/data-storage.md index 2c2c8621..7131f439 100644 --- a/docs/docs/en/data-storage.md +++ b/docs/docs/en/data-storage.md @@ -31,8 +31,8 @@ For example: --- sr-due: 2024-07-01 - sr-interval: 3 sr-ease: 269 + sr-interval: 3 --- ### Single Scheduling File diff --git a/docs/vaults/en/Computing/AWS/DynamoDB/Amazon DynamoDB.md b/docs/vaults/en/Computing/AWS/DynamoDB/Amazon DynamoDB.md index 1ca6a811..f2a472ca 100644 --- a/docs/vaults/en/Computing/AWS/DynamoDB/Amazon DynamoDB.md +++ b/docs/vaults/en/Computing/AWS/DynamoDB/Amazon DynamoDB.md @@ -1,9 +1,9 @@ --- sr-due: 2024-08-03 -sr-interval: 4 sr-ease: 270 +sr-interval: 4 --- #review -Hi guys in this lesson, we're going to go over some of the fundamentals of Amazon DynamoDB. So what is DynamoDB. Well, it's a fully managed NoSQL database service. So it's more of an unstructured database compared to SQL which has more of a rigid structure. It's a key value store and also a document store. It's non-relational, key value. It's fully serverless and you get push button scaling. So that means it's very easy to adjust how your database scales by adjusting the throughput, which will look at in this section. So you have a DynamoDB table and it's essentially scaling horizontally as you give it more throughput. And on the backend that's happening across various partitions in the Amazon data center. So data is stored in partitions and they are replicated across multiple AZs within a single region. DynamoDB provides low latency. So it's in the range of milliseconds. If you need lower latency, like microsecond and these are keywords to look for in exam questions then you would need to use DynamoDB accelerator which we'll also cover in this section. All data gets stored on SSD storage. So it's solid state drives which are high performance. Data gets replicated as we mentioned in the previous slide across multiple AZs within a region. And there's a feature called Global Tables and that will synchronize your tables across regions if you need to have replication across regions. Maybe you're running an application in another region or you might be using it in a DR HA set up. Now, let's look at some of the features of DynamoDB. So as I mentioned, it is fully serverless, it's fully managed, and it's fault tolerant. It's highly available with four nines availability and five nines if you use Global Tables. It's a NoSQL type of database with a name/value structure, has a flexible schemer, which is good for when your data is not well structured or it's unpredictable. Scaling is horizontal by adjusting the throughput. And then AWS takes care of how it actually scales your database across partitions on the backend. And you can also use Auto Scaling. DynamoDB streams is a feature that allows you to capture a time-ordered sequence of item level modifications in a table and it stores that information for up to 24 hours. DynamoDB accelerator is a fully managed in-memory cache for DynamoDB. So that reduces the latency to microseconds. And that runs on EC2 instances. There are various transaction options, including strongly consistent and eventually consistent reads and support for ACID transactions. We'll go into more detail about that. For backup, you get point in time recovery down to the second in the last 35 days and also on-demand backup and restore. And global tables is a fully managed multi-region, multi-master solution. So with global tables you can make changes in each of the regions your table is replicated to. 
So let's look at the core components of DynamoDB. Firstly we have a table. So everything you see here would constitute the contents of a table. Then we have items. An item is essentially a row in the DynamoDB table. And then we have attributes. The attribute is the information that's associated with each of the items in the database. For the exam it's worth understanding some of the API actions. Now all operations are categorized as control plane or data plane. So let's have a look at some control plane. API actions. For instance create table to create a new table, describe table to get information about an existing table, and list tables will return the names of all your tables in a list. Update table is where you're able to modify the settings of a table or its indexes. And then delete table will delete the table and all the contents. Data plane API actions can be performed using PartiQL, which is SQL compatible or classic DynamoDB, create, read, update, delete or CRUD APIs. So let's have a look at some examples. You've got PutItem to write a single item into a table. BatchWriteItem so you can actually write up to 25 items to a table. So that's more efficient. And then GetItem which will retrieve a single item from a table. BatchGetItem retrieves up to 100 items from one or more tables. So the batch options give you more efficiency when you have large reads or large writes. Update item will modify one or more attributes in an item. And we have delete item to delete a single item from a table. Now let's have a look at some of the supported data types. DynamoDB does support many data types and they're categorized as follows. We've got scalar. A scalar type can represent exactly one value. And those are number, string, binary, boolean, and null. We've then got document types. Those are list and map. And we've got set types. A set type represents multiple scalar values and those are set, number set, and binary set. Now there are a couple of classes of table we can use. We've got the standard which is the default and it's recommended for most workloads. We've then got DynamoDB Standard Infrequent Access or DynamoDB Standard IA. This is lower cost storage for tables that store infrequently accessed data. For example, application logs, old social media posts, E-commerce order history, or past gaming achievements. Let's move on to access control. Access control is managed using IAM. So its identity-based policies that we're using to control access to DynamoDB. You can attach a permissions policy to a user or a group in your account and you can apply a permissions policy to a role. And you can grant cross-account permissions through that option as well. DynamoDB does not support resource-based policies. you can use a special IAM condition to restrict user access to only their own records. The primary DynamoDB resources are tables but it also supports additional resource types, indexes, and streams. You can create indexes and streams only in the context of an existing table. So there are several resources of the actual DynamoDB table. The resources and sub resources will have unique ARNs of their own. And we can see in the table here what the format is. So we've got a table, an index, and a stream and you can see the format of the ARN. Of course where it's highlighted in red that's that's where you would actually replace these values with your region, your account ID, and then your table name or your stream label. Now let's have a look at a couple of example policies. 
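Sketched as TypeScript objects, the two example policies discussed here might look roughly like this (the variable names, region, and account ID in the ARN are illustrative placeholders, not values from the lesson):

```ts
// Policy 1: allow only the dynamodb:ListTables action, on any resource.
const listTablesPolicy = {
    Version: "2012-10-17",
    Statement: [
        {
            Effect: "Allow",
            Action: ["dynamodb:ListTables"],
            Resource: "*",
        },
    ],
};

// Policy 2: allow DescribeTable, Query, and Scan, but only on the Books table.
const booksReadOnlyPolicy = {
    Version: "2012-10-17",
    Statement: [
        {
            Effect: "Allow",
            Action: ["dynamodb:DescribeTable", "dynamodb:Query", "dynamodb:Scan"],
            Resource: "arn:aws:dynamodb:us-east-1:123456789012:table/Books",
        },
    ],
};
```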
The following example policy grants permissions for one DynamoDB action. That's DynamoDB list tables. So we can see the policy here. The effect is allow. The action is DynamoDB list tables. The resource in this case is *, so any DynamoDB table. The resource is * so that means any table. So it's not specifying a specific ARN. Let's look at another policy. This one grants permissions for three DynamoDB actions. And we can see those are DynamoDB describe table, query, and scan and in this case we're actually specifying the ARN of one individual table. That table is named Books. So the actions that are allowed through this policy will only be allowed on that specific table. So that's it for some of the core fundamentals of DynamoDB. We've got lots more to get on with, and I'll see you in the next lesson. \ No newline at end of file +Hi guys, in this lesson, we're going to go over some of the fundamentals of Amazon DynamoDB. So what is DynamoDB? Well, it's a fully managed NoSQL database service. So it's more of an unstructured database compared to SQL which has more of a rigid structure. It's a key-value store and also a document store. It's non-relational, key-value. It's fully serverless and you get push-button scaling. So that means it's very easy to adjust how your database scales by adjusting the throughput, which we'll look at in this section. So you have a DynamoDB table and it's essentially scaling horizontally as you give it more throughput. And on the backend that's happening across various partitions in the Amazon data center. So data is stored in partitions and they are replicated across multiple AZs within a single region. DynamoDB provides low latency. So it's in the range of milliseconds. If you need lower latency, like microseconds, and these are keywords to look for in exam questions, then you would need to use DynamoDB Accelerator, which we'll also cover in this section. All data gets stored on SSD storage. So it's solid state drives which are high performance. Data gets replicated as we mentioned in the previous slide across multiple AZs within a region. And there's a feature called Global Tables and that will synchronize your tables across regions if you need to have replication across regions. Maybe you're running an application in another region or you might be using it in a DR/HA setup. Now, let's look at some of the features of DynamoDB. So as I mentioned, it is fully serverless, it's fully managed, and it's fault tolerant. It's highly available with four nines availability and five nines if you use Global Tables. It's a NoSQL type of database with a name/value structure, has a flexible schema, which is good for when your data is not well structured or it's unpredictable. Scaling is horizontal by adjusting the throughput. And then AWS takes care of how it actually scales your database across partitions on the backend. And you can also use Auto Scaling. DynamoDB Streams is a feature that allows you to capture a time-ordered sequence of item-level modifications in a table and it stores that information for up to 24 hours. DynamoDB Accelerator is a fully managed in-memory cache for DynamoDB. So that reduces the latency to microseconds. And that runs on EC2 instances. There are various transaction options, including strongly consistent and eventually consistent reads and support for ACID transactions. We'll go into more detail about that. For backup, you get point-in-time recovery down to the second in the last 35 days and also on-demand backup and restore.
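As a quick sketch of the strongly consistent versus eventually consistent read options just mentioned, using the AWS SDK for JavaScript v3 (the table name and key names here are illustrative, not from the lesson):

```ts
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-east-1" });

async function readOrder(): Promise<void> {
    const result = await client.send(
        new GetItemCommand({
            TableName: "Orders", // illustrative table and key names
            Key: {
                clientId: { S: "c-001" },
                created: { S: "2024-08-03" },
            },
            // true = strongly consistent read; omit it (default false) for an
            // eventually consistent read at half the read capacity cost
            ConsistentRead: true,
        }),
    );
    console.log(result.Item);
}

readOrder().catch(console.error);
```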
And Global Tables is a fully managed multi-region, multi-master solution. So with Global Tables you can make changes in each of the regions your table is replicated to. So let's look at the core components of DynamoDB. Firstly we have a table. So everything you see here would constitute the contents of a table. Then we have items. An item is essentially a row in the DynamoDB table. And then we have attributes. The attribute is the information that's associated with each of the items in the database. For the exam it's worth understanding some of the API actions. Now all operations are categorized as control plane or data plane. So let's have a look at some control plane API actions. For instance, CreateTable to create a new table, DescribeTable to get information about an existing table, and ListTables will return the names of all your tables in a list. UpdateTable is where you're able to modify the settings of a table or its indexes. And then DeleteTable will delete the table and all the contents. Data plane API actions can be performed using PartiQL, which is SQL-compatible, or the classic DynamoDB create, read, update, delete (CRUD) APIs. So let's have a look at some examples. You've got PutItem to write a single item into a table. BatchWriteItem, so you can actually write up to 25 items to a table. So that's more efficient. And then GetItem which will retrieve a single item from a table. BatchGetItem retrieves up to 100 items from one or more tables. So the batch options give you more efficiency when you have large reads or large writes. UpdateItem will modify one or more attributes in an item. And we have DeleteItem to delete a single item from a table. Now let's have a look at some of the supported data types. DynamoDB does support many data types and they're categorized as follows. We've got scalar. A scalar type can represent exactly one value. And those are number, string, binary, boolean, and null. We've then got document types. Those are list and map. And we've got set types. A set type represents multiple scalar values and those are string set, number set, and binary set. Now there are a couple of classes of table we can use. We've got the Standard class, which is the default and it's recommended for most workloads. We've then got DynamoDB Standard Infrequent Access or DynamoDB Standard-IA. This is lower-cost storage for tables that store infrequently accessed data. For example, application logs, old social media posts, e-commerce order history, or past gaming achievements. Let's move on to access control. Access control is managed using IAM. So it's identity-based policies that we're using to control access to DynamoDB. You can attach a permissions policy to a user or a group in your account and you can apply a permissions policy to a role. And you can grant cross-account permissions through that option as well. DynamoDB does not support resource-based policies. You can use a special IAM condition to restrict user access to only their own records. The primary DynamoDB resources are tables, but it also supports additional resource types: indexes and streams. You can create indexes and streams only in the context of an existing table. So there are several resources associated with the actual DynamoDB table. The resources and subresources will have unique ARNs of their own. And we can see in the table here what the format is. So we've got a table, an index, and a stream, and you can see the format of the ARN.
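As a rough illustration of those ARN formats (the region, account ID, index name, and stream label below are placeholders to substitute with your own values; only the Books table name comes from the lesson):

```ts
// Placeholder values: replace with your own region, account ID, and names.
const region = "us-east-1";
const accountId = "123456789012";

// Table ARN
const tableArn = `arn:aws:dynamodb:${region}:${accountId}:table/Books`;

// Index ARN (an index only exists in the context of its table)
const indexArn = `arn:aws:dynamodb:${region}:${accountId}:table/Books/index/BooksByAuthor`;

// Stream ARN (a stream is identified by a timestamp-style label)
const streamArn = `arn:aws:dynamodb:${region}:${accountId}:table/Books/stream/2024-08-03T00:00:00.000`;
```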
Of course where it's highlighted in red, that's where you would actually replace these values with your region, your account ID, and then your table name or your stream label. Now let's have a look at a couple of example policies. The following example policy grants permissions for one DynamoDB action. That's DynamoDB ListTables. So we can see the policy here. The effect is allow. The action is DynamoDB ListTables. The resource in this case is *, so that means any DynamoDB table. It's not specifying a specific ARN. Let's look at another policy. This one grants permissions for three DynamoDB actions. And we can see those are DynamoDB DescribeTable, Query, and Scan, and in this case we're actually specifying the ARN of one individual table. That table is named Books. So the actions that are allowed through this policy will only be allowed on that specific table. So that's it for some of the core fundamentals of DynamoDB. We've got lots more to get on with, and I'll see you in the next lesson. diff --git a/docs/vaults/en/Computing/AWS/DynamoDB/DynamoDB LSI and GSI.md b/docs/vaults/en/Computing/AWS/DynamoDB/DynamoDB LSI and GSI.md index f8f248a8..81342067 100644 --- a/docs/vaults/en/Computing/AWS/DynamoDB/DynamoDB LSI and GSI.md +++ b/docs/vaults/en/Computing/AWS/DynamoDB/DynamoDB LSI and GSI.md @@ -1,36 +1,36 @@ --- sr-due: 2024-08-01 -sr-interval: 3 sr-ease: 269 +sr-interval: 3 --- #course/aws/developer-associate #review -In this lesson, I'm going to cover what are called [[Local Secondary Index (LSI)|Local Secondary Indexes]] and [[Global Secondary Index (GSI)|Global Secondary Indexes]], LSIs and GSIs. +In this lesson, I'm going to cover what are called [[Local Secondary Index (LSI)|Local Secondary Indexes]] and [[Global Secondary Index (GSI)|Global Secondary Indexes]], LSIs and GSIs. # Local Secondary Indexes -So firstly let's look at the LSIs. These provide an alternative sort key to use for scans and queries. +So firstly let's look at the LSIs. These provide an alternative sort key to use for scans and queries. -You can create up to five LSIs per table. And they must be created at the creation time for your DynamoDB table. You cannot add, remove, or modify them later on. +You can create up to five LSIs per table. And they must be created at the time you create your DynamoDB table. You cannot add, remove, or modify them later on. -The LSI has the same partition key as your original table, but it has a different sort key. So this helps you being able to perform scans and queries that you can't do in your primary table. Essentially it gives you a different view of your data organized by the alternative sort key. +The LSI has the same partition key as your original table, but it has a different sort key. So this helps you perform scans and queries that you can't do on your primary table. Essentially it gives you a different view of your data organized by the alternative sort key. -Any queries based on the sort key are much faster using the index than the main table. So let's have a look at the diagram. We have our primary table. And I'm not showing the attributes. I'm just showing the partition key and the sort key here. +Any queries based on the sort key are much faster using the index than the main table. So let's have a look at the diagram. We have our primary table. And I'm not showing the attributes. I'm just showing the partition key and the sort key here. -So we have client ID and then we have created.
Now what we might want to do is create an index from our primary table. And this LSI has a partition key and the partition key is always going to be the same, so it's client ID. But in this case the sort key is the SKU. +So we have client ID and then we have created. Now what we might want to do is create an index from our primary table. And this LSI has a partition key, and the partition key is always going to be the same, so it's client ID. But in this case the sort key is the SKU. ![[Pasted image 20240507170722.png]] -Attributes can be optionally [[Attribute Projection|projected]] and that means that they will be actually put into the LSI as well. So it really depends what exactly you're searching for. So remember the sort key is different on the LSI, but the partition key is always the same. +Attributes can be optionally [[Attribute Projection|projected]] and that means that they will actually be put into the LSI as well. So it really depends what exactly you're searching for. So remember the sort key is different on the LSI, but the partition key is always the same. -Now let's have a look at a couple of examples of querying. So in this example, querying the main table we must use the partition key client ID and the sort key created. But in another example with an LSI, we can query the index for any orders made by a certain user with the SKU because we can have a different sort key than the main table. +Now let's have a look at a couple of examples of querying. So in this example, querying the main table, we must use the partition key (client ID) and the sort key (created). But in another example with an LSI, we can query the index for any orders made by a certain user with the SKU because we can have a different sort key than the main table. # GSI -Next we have the Global Secondary Index. This is used to speed up queries on non-key attributes. So those are not part of the partition key. You can create these when you create your table or at any time. And you can specify a different partition key as well as a different sort key. It gives a completely different view of the data. And it speeds up any queries relating to this alternative partition key and sort key. +Next we have the Global Secondary Index. This is used to speed up queries on non-key attributes. So those are not part of the partition key. You can create these when you create your table or at any time. And you can specify a different partition key as well as a different sort key. It gives a completely different view of the data. And it speeds up any queries relating to this alternative partition key and sort key. -So again, let's have a look at an example. We have a primary table here with a partition key or client ID. And a sort key is created. The index is created again from the primary table. And with the GSI, in this case we've got a completely different partition key and a different sort key. And again optionally, we can project attributes into the GSI if we wish to. So let's look at an example. And in this one we can query the index for orders of the SKU where the quantity is greater than one. +So again, let's have a look at an example. We have a primary table here with a partition key of client ID and a sort key of created. The index is created again from the primary table. And with the GSI, in this case we've got a completely different partition key and a different sort key. And again optionally, we can project attributes into the GSI if we wish to. So let's look at an example.
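As a rough sketch of such a query in code, assuming an illustrative table named Orders and a GSI named sku-quantity-index with partition key sku and sort key quantity (none of these names come from the lesson):

```ts
import { DynamoDBClient, QueryCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-east-1" });

// Query the GSI for all orders of a given SKU with a quantity greater than one.
async function ordersBySku(): Promise<void> {
    const result = await client.send(
        new QueryCommand({
            TableName: "Orders",             // illustrative table name
            IndexName: "sku-quantity-index", // illustrative GSI name
            KeyConditionExpression: "#sku = :sku AND #qty > :min",
            ExpressionAttributeNames: { "#sku": "sku", "#qty": "quantity" },
            ExpressionAttributeValues: {
                ":sku": { S: "SKU-123" },
                ":min": { N: "1" },
            },
        }),
    );
    console.log(result.Items);
}

ordersBySku().catch(console.error);
```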
And in this one we can query the index for orders of the SKU where the quantity is greater than one. And that's because we have a different partition key and a different sort key. diff --git a/docs/vaults/en/Math/Ellipse/Ellipse Equation.md b/docs/vaults/en/Math/Ellipse/Ellipse Equation.md index 685b3748..e3aec2c9 100644 --- a/docs/vaults/en/Math/Ellipse/Ellipse Equation.md +++ b/docs/vaults/en/Math/Ellipse/Ellipse Equation.md @@ -1,7 +1,7 @@ --- sr-due: 2024-08-31 -sr-interval: 3 sr-ease: 266 +sr-interval: 3 --- #review @@ -19,11 +19,10 @@ Where ![[Pasted image 20230828144739.png]] ---- -#flashcards/math +#flashcards/math # Questions Given lengths a and b (half the length of the major and minor axes respectively), and a coordinate system with origin half way between an ellipse's foci, what is the ellipse's equation ? $\large \frac {x ^2} {a ^2}+\frac {y ^2} {b ^2}=1$ - diff --git a/docs/vaults/en/Physics/Electric Field/Electric Field Simulation.md b/docs/vaults/en/Physics/Electric Field/Electric Field Simulation.md index e26525ad..cac3ece2 100644 --- a/docs/vaults/en/Physics/Electric Field/Electric Field Simulation.md +++ b/docs/vaults/en/Physics/Electric Field/Electric Field Simulation.md @@ -1,10 +1,9 @@ --- sr-due: 2024-08-31 -sr-interval: 1 sr-ease: 230 +sr-interval: 1 --- #review [[charges-and-fields_en.html]] - diff --git a/src/constants.ts b/src/constants.ts index 136d3794..0d83dae8 100644 --- a/src/constants.ts +++ b/src/constants.ts @@ -1,8 +1,10 @@ // To cater for both LF and CR-LF line ending styles, "\r?\n" is used to match the newline character sequence // https://github.com/st3v3nmw/obsidian-spaced-repetition/issues/776 -export const SCHEDULING_INFO_REGEX = - /^---\r?\n((?:.*\r?\n)*)sr-due: (.+)\r?\nsr-interval: (\d+)\r?\nsr-ease: (\d+)\r?\n((?:.*\r?\n)?)---/; -export const YAML_FRONT_MATTER_REGEX = /^---\r?\n((?:.*\r?\n)*?)---/; +export const SCHEDULING_INFO_DUE_REGEX = /^(---(?:.*\r?\n)*sr-due: )(.+)((?:.*\r?\n)*---)/; +export const SCHEDULING_INFO_EASE_REGEX = /^(---(?:.*\r?\n)*sr-ease: )(.+)((?:.*\r?\n)*---)/; +export const SCHEDULING_INFO_INTERVAL_REGEX = + /^(---(?:.*\r?\n)*sr-interval: )(.+)((?:.*\r?\n)*---)/; +export const YAML_FRONT_MATTER_REGEX = /^---\r?\n((?:.*\r?\n)*)---/; export const MULTI_SCHEDULING_EXTRACTOR = /!([\d-]+),(\d+),(\d+)/gm; export const LEGACY_SCHEDULING_EXTRACTOR = //gm; diff --git a/src/data-store-algorithm/data-store-in-note-algorithm-osr.ts b/src/data-store-algorithm/data-store-in-note-algorithm-osr.ts index 6fe4b67b..57eac1f6 100644 --- a/src/data-store-algorithm/data-store-in-note-algorithm-osr.ts +++ b/src/data-store-algorithm/data-store-in-note-algorithm-osr.ts @@ -1,12 +1,13 @@ -import { Moment } from "moment"; -import moment from "moment"; +import moment, { Moment } from "moment"; import { RepItemScheduleInfo } from "src/algorithms/base/rep-item-schedule-info"; import { RepItemScheduleInfoOsr } from "src/algorithms/osr/rep-item-schedule-info-osr"; import { Card } from "src/card"; import { ALLOWED_DATE_FORMATS, - SCHEDULING_INFO_REGEX, + SCHEDULING_INFO_DUE_REGEX, + SCHEDULING_INFO_EASE_REGEX, + SCHEDULING_INFO_INTERVAL_REGEX, SR_HTML_COMMENT_BEGIN, SR_HTML_COMMENT_END, YAML_FRONT_MATTER_REGEX, @@ -35,12 +36,12 @@ export class DataStoreInNoteAlgorithmOsr implements IDataStoreAlgorithm { if ( frontmatter && frontmatter.has("sr-due") && - frontmatter.has("sr-interval") && - frontmatter.has("sr-ease") + frontmatter.has("sr-ease") && + frontmatter.has("sr-interval") ) { const dueDate: Moment = moment(frontmatter.get("sr-due"), 
ALLOWED_DATE_FORMATS); - const interval: number = parseFloat(frontmatter.get("sr-interval")); const ease: number = parseFloat(frontmatter.get("sr-ease")); + const interval: number = parseFloat(frontmatter.get("sr-interval")); result = new RepItemScheduleInfoOsr(dueDate, interval, ease); } return result; @@ -51,17 +52,30 @@ export class DataStoreInNoteAlgorithmOsr implements IDataStoreAlgorithm { const schedInfo: RepItemScheduleInfoOsr = repItemScheduleInfo as RepItemScheduleInfoOsr; const dueString: string = formatDateYYYYMMDD(schedInfo.dueDate); - const interval: number = schedInfo.interval; const ease: number = schedInfo.latestEase; + const interval: number = schedInfo.interval; // check if scheduling info exists - if (SCHEDULING_INFO_REGEX.test(fileText)) { - const schedulingInfo = SCHEDULING_INFO_REGEX.exec(fileText); + const hasSchedulingDueString = SCHEDULING_INFO_DUE_REGEX.test(fileText); + const hasSchedulingEase = SCHEDULING_INFO_EASE_REGEX.test(fileText); + const hasSchedulingInterval = SCHEDULING_INFO_INTERVAL_REGEX.test(fileText); + if (hasSchedulingDueString && hasSchedulingEase && hasSchedulingInterval) { + const schedulingDueString = SCHEDULING_INFO_DUE_REGEX.exec(fileText); + fileText = fileText.replace( + SCHEDULING_INFO_DUE_REGEX, + `${schedulingDueString[1]}${dueString}${schedulingDueString[3]}`, + ); + + const schedulingEase = SCHEDULING_INFO_EASE_REGEX.exec(fileText); + fileText = fileText.replace( + SCHEDULING_INFO_EASE_REGEX, + `${schedulingEase[1]}${ease}${schedulingEase[3]}`, + ); + + const schedulingInterval = SCHEDULING_INFO_INTERVAL_REGEX.exec(fileText); fileText = fileText.replace( - SCHEDULING_INFO_REGEX, - `---\n${schedulingInfo[1]}sr-due: ${dueString}\n` + - `sr-interval: ${interval}\nsr-ease: ${ease}\n` + - `${schedulingInfo[5]}---`, + SCHEDULING_INFO_INTERVAL_REGEX, + `${schedulingInterval[1]}${interval}${schedulingInterval[3]}`, ); } else if (YAML_FRONT_MATTER_REGEX.test(fileText)) { // new note with existing YAML front matter @@ -69,12 +83,12 @@ export class DataStoreInNoteAlgorithmOsr implements IDataStoreAlgorithm { fileText = fileText.replace( YAML_FRONT_MATTER_REGEX, `---\n${existingYaml[1]}sr-due: ${dueString}\n` + - `sr-interval: ${interval}\nsr-ease: ${ease}\n---`, + `sr-ease: ${ease}\nsr-interval: ${interval}\n---`, ); } else { fileText = - `---\nsr-due: ${dueString}\nsr-interval: ${interval}\n` + - `sr-ease: ${ease}\n---\n\n${fileText}`; + `---\nsr-due: ${dueString}\nsr-ease: ${ease}\n` + + `sr-interval: ${interval}\n---\n\n${fileText}`; } await note.write(fileText); diff --git a/tests/unit/constants.test.ts b/tests/unit/constants.test.ts index d6dbd604..55f3547e 100644 --- a/tests/unit/constants.test.ts +++ b/tests/unit/constants.test.ts @@ -1,4 +1,9 @@ -import { YAML_FRONT_MATTER_REGEX } from "src/constants"; +import { + SCHEDULING_INFO_DUE_REGEX, + SCHEDULING_INFO_EASE_REGEX, + SCHEDULING_INFO_INTERVAL_REGEX, + YAML_FRONT_MATTER_REGEX, +} from "src/constants"; describe("YAML_FRONT_MATTER_REGEX", () => { function createTestStr1(sep: string): string { @@ -17,3 +22,19 @@ describe("YAML_FRONT_MATTER_REGEX", () => { expect(YAML_FRONT_MATTER_REGEX.test(text)).toEqual(true); }); }); + +describe("SCHEDULING_INFO__REGEX", () => { + const info = "---\nsr-due: 2024-08-10\nsr-interval: 273\nsr-ease: 309\n---"; + + test("Extract due date", async () => { + expect(SCHEDULING_INFO_DUE_REGEX.exec(info)[2]).toEqual("2024-08-10"); + }); + + test("Extract ease", async () => { + expect(SCHEDULING_INFO_EASE_REGEX.exec(info)[2]).toEqual("309"); + }); + + 
test("Extract interval", async () => { + expect(SCHEDULING_INFO_INTERVAL_REGEX.exec(info)[2]).toEqual("273"); + }); +}); diff --git a/tests/unit/data-store-algorithm/data-store-in-note-algorithm-osr.test.ts b/tests/unit/data-store-algorithm/data-store-in-note-algorithm-osr.test.ts index 84c33b0a..def1a36e 100644 --- a/tests/unit/data-store-algorithm/data-store-in-note-algorithm-osr.test.ts +++ b/tests/unit/data-store-algorithm/data-store-in-note-algorithm-osr.test.ts @@ -31,8 +31,8 @@ A very interesting note const expectedText: string = `--- created: 2024-01-17 sr-due: 2023-10-06 -sr-interval: 25 sr-ease: 263 +sr-interval: 25 --- A very interesting note `; diff --git a/tests/vaults/notes2/Triboelectric Effect.md b/tests/vaults/notes2/Triboelectric Effect.md index 8475afcb..8a1e13d2 100644 --- a/tests/vaults/notes2/Triboelectric Effect.md +++ b/tests/vaults/notes2/Triboelectric Effect.md @@ -1,7 +1,7 @@ --- sr-due: 2025-02-21 -sr-interval: 421 sr-ease: 270 +sr-interval: 421 --- #review diff --git a/tests/vaults/notes4/A.md b/tests/vaults/notes4/A.md index d9f6c3fe..3d8346c2 100644 --- a/tests/vaults/notes4/A.md +++ b/tests/vaults/notes4/A.md @@ -1,7 +1,7 @@ --- sr-due: 2023-09-10 -sr-interval: 4 sr-ease: 270 +sr-interval: 4 --- #review diff --git a/tests/vaults/notes5/A.md b/tests/vaults/notes5/A.md index d9f6c3fe..3d8346c2 100644 --- a/tests/vaults/notes5/A.md +++ b/tests/vaults/notes5/A.md @@ -1,7 +1,7 @@ --- sr-due: 2023-09-10 -sr-interval: 4 sr-ease: 270 +sr-interval: 4 --- #review