
WIP: Dimensional metric reporting #376

Closed · wants to merge 3 commits

Conversation

@tpitale (Contributor) commented Jan 14, 2022:

  • Adds `NewRelic.report_dimensional_metric/4`

@CLAassistant commented:

CLA assistant check — Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.

Tony Pitale seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.

````elixir
      NewRelic.report_dimensional_metric(:count, "my_metric_name", 1, %{some: "attributes"})
  ```
  """
  defdelegate report_dimensional_metric(type, name, value, attributes \\ %{}),
````
@tpitale (author) commented on the diff:
I feel like there should be other functions that call this, as well. Similar to how incrementing a timeslice metric works.

```elixir
@@ -229,6 +229,17 @@ defmodule NewRelic do
  defdelegate report_custom_event(type, attributes),
    to: NewRelic.Harvest.Collector.CustomEvent.Harvester

  @doc """
  Report a Dimensional Metric.

  Valid types: `:count`, `:gauge`, and `:summary`.
```
@tpitale (author) commented on the diff:

Probably should add some docs (or a link) on when to use each of the types.


```elixir
  # API

  @spec report_dimensional_metric(:count | :gauge | :summary, atom() | binary(), any, map()) ::
          :ok
```
@tpitale (author):
Should I remove the typespec?

@binaryseed (Collaborator):
I feel like a pretty detailed spec would be worth it on the function inside the NewRelic module. I'd go beyond what's here to help folks understand what the value should be for the various types.
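One way to make the value's meaning explicit per type would be a typedoc alongside the spec. This is only a sketch of the suggestion, not code from the PR; the type name `metric_value` is illustrative:

```elixir
@typedoc """
Illustrative only: what `value` means for each metric type.

  * `:count`   - a number added to the running total
  * `:gauge`   - the current value of the measurement
  * `:summary` - a single observation folded into min/max/sum/count
"""
@type metric_value :: number()

@spec report_dimensional_metric(:count | :gauge | :summary, atom() | binary(), metric_value(), map()) ::
        :ok
```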

```elixir
  ]
end

defp common(%{start_time_ms: start_time_ms}) do
```
@tpitale (author) commented on the diff:
I made the start time the common timestamp for any metrics in this harvest cycle. I don't think there is any accuracy below (or even near to) 5 seconds when querying.
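In other words, the harvest state's `start_time_ms` becomes the shared timestamp in the payload's common block, something like the following (a sketch of the idea, not the exact harvester code):

```elixir
defp common(%{start_time_ms: start_time_ms}) do
  # Every metric gathered in this harvest cycle shares one timestamp:
  # the millisecond start time of the cycle.
  %{"timestamp" => start_time_ms}
end
```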

```elixir
  assert common["timestamp"] > 0

  assert length(metrics) == 2
  [metric1, metric2] = metrics
```
@tpitale (author):

This points out that the metrics come back in reverse order, because that's how they're added. Not sure if it's worth swapping the order, or reversing the metrics at gather time.


```elixir
  @spec report_dimensional_metric(:count | :gauge | :summary, atom() | binary(), any, map()) ::
          :ok
  def report_dimensional_metric(type, name, value, attributes) when type in @valid_types do
```
@binaryseed (Collaborator) commented Jan 23, 2022:

The main risk I'm noticing in this PR is that `report_dimensional_metric` collects all the metrics and sends every one up, while `report_custom_metric` aggregates each metric during the harvest cycle and only sends each one once.

Which behavior do you want this function to have? I suspect most folks would assume the aggregated behavior. For example, if someone calls this function for every single request, we wouldn't want to copy the name and attributes potentially thousands of times a second.

If you did want that behavior, perhaps `send_dimensional_metric` is more expressive of what will happen?

@tpitale (author):

Yeah, I definitely agree that it's not what I would expect.

What are your thoughts on efficiently storing the map of attributes along with the metric name?

E.g., I report `metric_a` with attributes a, b, and c. Then I report `metric_a` with attributes c, d, and e.

Do I merge the maps? Or does that remain as two metrics?

@tpitale (author):

I could use something like `:erlang.phash2/1` on the attributes map and then aggregate matching ones.

@binaryseed (Collaborator):

I think that's right: the whole of the attributes makes up a unique metric. The other metric harvester has an optimized aggregation implementation you could steal.

@tpitale (author):

So it would be keyed off of a tuple of the name and the hash of the attributes? The timeslice metric harvester uses the name and the scope, it seems.

@binaryseed (Collaborator):

Yep!
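The keying scheme the thread converges on could be sketched like this. The module and function names are illustrative, not the harvester's actual internals, and the value-merging shown is the simple `:count` case only:

```elixir
defmodule DimensionalMetrics.Aggregation do
  # Illustrative sketch: accumulate reported metrics in a map keyed by
  # {name, :erlang.phash2(attributes)}, so reports with an identical
  # attribute set collapse into a single metric per harvest cycle,
  # while different attribute sets remain separate metrics.
  def record(acc, type, name, value, attributes) do
    key = {name, :erlang.phash2(attributes)}

    Map.update(
      acc,
      key,
      %{type: type, name: name, value: value, attributes: attributes},
      fn metric -> %{metric | value: metric.value + value} end
    )
  end
end
```

So reporting `metric_a` twice with the same attribute map yields one aggregated entry, while reporting it again with a different attribute map yields a second, separate entry — matching the conclusion above that the whole attribute map is part of the metric's identity.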

@tpitale (author) commented Sep 15, 2023:

Closing in favor of: #408

@tpitale closed this Sep 15, 2023.