Decision Proposal 288 - Non-Functional Requirements Revision #288
Comments
The opening comment has been updated and the Decision Proposal is now attached. The Data Standards Body welcomes your feedback. |
In the recommended “consent metrics”, we can potentially look at splitting the authorisation count for accounts between individual and non-individual entities. This could provide an indication of adoption among businesses. |
On behalf of the ABA, can we please request an extra week for our Members to respond. Thank you. |
Of course, no problem. We'll extend the consultation until 7th April |
We recommend making the “GET /admin/metrics” endpoint publicly accessible without any authentication or protection. This change would provide numerous benefits to the ecosystem.
We believe that restricting access to the “GET /admin/metrics” endpoint only to the ACCC and individual data holders limits the potential benefits to the ecosystem. By allowing public access, ADRs and other stakeholders can make better-informed decisions and plan their approach to each data holder more effectively. |
Hi @damircuca Are you finding cases where the Get Status endpoint does not accurately represent implementation availability (i.e., because you encounter unexpected unavailability), or that there is not enough detail (on specific endpoint availability, for example) for it to be useful when initiating a series of requests? |
Hi @nils-work If you are able, please take a look at https://cdrservicemanagement.atlassian.net/servicedesk/customer/portal/2/CDR-3328 to see an example of when the Get Status endpoint has not worked sufficiently. You can see the failure reflected in the CDR performance dashboard screengrab below. Availability is apparently 100%, but a 50% drop in API traffic is visible, i.e. APIs are down; specifically, 500 errors on data retrieval APIs. |
Thanks @jimbasiq, I'm not sure if I'll be able to access that ticket, but I'll check. As a general comment, and it may not have been the case here, my initial thinking is that a scheduled outage may produce this effect (I note the drop in traffic appears to be over a weekend). The Availability metric (at ~100%) would not be affected by a scheduled outage, but any invocations (resulting in 500s) may still be recorded and reported in Metrics (though there is not an expectation of this during an outage). This makes it appear that either the Status response or the Metrics reporting was inaccurate. If it was an unexpected outage, the Status response should have reflected the unavailability. |
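As background to the exchange above, availability is measured against a reporting window that excludes scheduled outages, which is why planned maintenance can leave the metric at ~100% while invocations during the window still return 500s. A minimal sketch of that calculation (the function and parameter names are assumptions, not taken from the standards):

```python
def availability_pct(total_secs: int, unplanned_outage_secs: int,
                     scheduled_outage_secs: int = 0) -> float:
    """Availability over a reporting window, excluding scheduled outages.

    Scheduled outage time is removed from the measurement window entirely,
    so a weekend maintenance window leaves the metric at 100% even though
    requests made during that window may have failed.
    """
    measured = total_secs - scheduled_outage_secs
    if measured <= 0:
        raise ValueError("scheduled outages cannot cover the whole window")
    return 100.0 * (measured - unplanned_outage_secs) / measured
```

Under this model, a fully scheduled outage does not move the metric at all, which matches the behaviour described in the comment above.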
Hey @nils-work, generally the more data that is available, the more options we have on how to incorporate it within the delivery of CDR services. The Get Status endpoint is very coarse and doesn't provide enough depth for us, whereas Metrics has a lot more detail that can be used to better support customers, implement a more refined CDR CX flow, and also help with fault resolution. Even now, whenever a support ticket is raised, we find ourselves going straight to the metrics report (available via the CDR site) to see what the lay of the land looks like before we respond to our customers. Further to this, even with scraping we have been surfacing metrics such as performance metrics, stability percentages and more, which we find valuable for driving decisions and future enhancements. We realise it may be a big ask, but it would be valuable, and would also raise the transparency and accountability within the ecosystem, which is equally important for making it a success. |
Rather than opening up the Get Metrics endpoint to the public, I think it is worthwhile to allow the public to sign up and download the raw Get Metrics data that the ACCC CDR collects. This would still place the responsibility of chasing non-responding Data Holder Brands with the CDR and won't flood the Data Holder Brands with Get Metrics requests. As a side note, stemming from ADRs hitting maximum TPS thresholds, I think it is also worthwhile to revisit batch CDR requests and whether there is a need for something like a Get Bulk Transactions endpoint for banking, if there is a use case to fetch transactions periodically, and for the DSB to consider creating a best practice article on ZenDesk on ADR calling patterns, i.e. do you need to perform a Get Customer Detail and Get Account Detail every time you want to pull transactions down? Otherwise any increase in traffic threshold will be soaked up with "low value" calls and we will be forever chasing more and more bandwidth. |
In response to @ranjankg2000:
Thank you for this feedback. This is a good idea to incorporate |
In response to @damircuca: Making the metrics API public is a very interesting idea. The DSB will discuss this internally with Treasury and the ACCC to identify any policy reasons why this would not be possible. There are no real technical reasons why this would be an issue, provided there were low non-functional requirements that would ensure Data Holder implementations didn't need to be over-scaled. The other option provided by @CDRPhoenix, where the data is made available from the Register, is also something that could be investigated. |
One thing to consider, which @CDRPhoenix touched on, was to open up the data that the ACCC collects vs forcing the Data Holders to make changes on their end. Sorry for stating the obvious, you're likely considering this already 🤷🏻♂️ |
The ACCC supports the changes outlined in Decision Proposal 288. These changes will improve the accuracy of the information captured through Get Metrics and better support the estimation of consumer uptake. The ACCC suggests a further change to the PerformanceMetrics value. Currently, it is proposed that this value be split into unauthenticated and authenticated metrics. The ACCC suggests that splitting this value by performance tier (i.e. Unauthenticated, High Priority, Low Priority, Unattended etc.) would better align these measures with the metrics reported for invocations and averageResponse. This change would assist the ACCC’s monitoring of Data Holders’ compliance with the performance requirements. The ACCC notes suggestions by participants regarding the availability of Get Metrics data. As flagged by the DSB above, the ACCC will continue to collaborate with its regulatory partners to assess how Get Metrics data can most effectively enhance the CDR ecosystem but suggests that such measures should be considered separately from this decision. |
Overall this will be a large-sized change for Great Southern Bank to implement. Given we have already planned work up until July 2023, it would be much appreciated if the obligation date for this change could be at least 6 months after the decision is made. Issue: Error code mappings. Issue: Lack of consent metrics. Issue: Scaling for large data holders. |
We are broadly supportive of the proposal to uplift the CDR’s non-functional requirements as outlined in Decision Proposal 288. This decision proposal describes a range of topics, and we suggest any proposed implementation schedule be priority driven with careful consideration given to improved consumer outcomes, ecosystem value and impact to data holders (i.e., cost, time, and complexity of implementation). Specific points of feedback as follows:
|
TPGT appreciates the opportunity to provide feedback in relation to Decision Proposal 288. Please find our feedback attached. |
AEMO thanks you for the opportunity to respond to this Decision Proposal. In terms of feedback on the getMetrics API, AEMO has the following comments:
While we accept that AEMO is obliged to service every request it receives, there are some observations we have already made that may improve the ADRs’ experience of this service: |
6 April 2023 Submitted on Thursday, 6 April 2023 via: [Consumer Data Standards Australia - GitHub site](#288) Dear @CDR-API-Stream ABA Response to Decision Proposal 288 – Non-Functional Requirements The Australian Banking Association (ABA) welcomes the opportunity to respond on behalf of our Members regarding DP 288 Non-Functional Requirements. The ABA has met with Members to discuss DP 288 in more detail and provides the following feedback. A point raised by Members centred on the Dynamic Client Registration (DCR) Response Time NFR. Members noted that the current Consumer Data Standards (the Standards) response times for DCR can prove challenging to comply with, given the additional latency taken by Accredited Data Recipients (ADRs) to undertake the registration request JWT validation required by the Standards. This additional latency, which Data Holders (DHs) rely on in relation to ADRs’ outbound connection whitelisting, is not split out from the times noted in the Standards. We propose that the DSB reconsiders amending response times to reflect this. As a point of reference, we include a link to last year’s Dynamic Client Registration Response Time NFR #409. We note DP 288 confirms that the DCR will not be subject to change but reserved for a future direction of the Register Standards. We ask the DSB to reconsider our Members’ position to address their concerns on this point as part of the development requirements in DP 288. Equally, we thank the DSB for providing clarity on the origins of the six new Consent Metrics (new authorisations) introduced in DP 288, where the DSB explained that the ACCC requested these specific new Consent Metrics so the ACCC can determine where customers are dropping off in the consent flow, and whether this is occurring at the ADR or DH ends. We propose an open discussion or workshop with the ACCC regarding their request for additional consent metrics as a way to understand and improve consent drop-off rates.
The cost and effort to add these metrics, when aggregated across all DHs, is significant. We propose that a small number of DHs that between them cover most consent flow types (essentially looking at different OTP delivery mechanisms) volunteer to provide the metrics requested on a one-off basis, as input for a study into improving consent flow UX, which is presumably what the ACCC wants the metrics for in the first place. This would lead to a faster outcome and be cheaper not only for all DHs but also for the volunteers (as they would not be extending the Metrics API, only collating the data on a one-off basis). We also note that the consent flow is likely to change radically because of Action Initiation and the introduction of FAPI 2.0 and RAR. Should the above volunteer proposal not be accepted, some Members have commented on the DP 288 section around Implementation Considerations, which includes the six new Consent Metrics. We note that the DSB acknowledged that it was prepared to ramp up the implementation schedule over an extended period. An initial proposal raised by the DSB was for five years as a potential period of implementation. The ABA welcomes this proposal by the DSB, to allow our Members to better resource and budget accordingly for these and other priorities, including those planned for future CDR implementation (e.g., Action Initiation and Non-Bank Lending). We would also ask that the DSB further considers how it would prefer to update the CX journey negative path. At any point along the customer journey, the customer can decide to cancel, and there can be multiple reasons why (regardless of whether the customer is still on the ADR side or DH side), recognising it is not only about the customer hitting a technical issue and being unable to continue with the journey.
Ideally, at the point of customer-initiated cancellation, data should be collected as to why the customer decided to cancel, and it should be a “standardised” set of reasons Members can all report on. Currently (incl. ADR/DH) this data is not collected from the customer when they cancel, as it is deemed to introduce friction and it is not in the DSB’s CX flow. We generally understand the DSB’s proposal to balance the requirement on TPS ceiling obligations by tying them to the number of consents held by each bank, meaning this is intended by the DSB to be a fairer allocation of investment across individual banks, as opposed to setting a fixed figure, which for some smaller Members may result in excessive systems costs based on a TPS ceiling measure those banks are not likely to reach. Members have expressed challenges on TPS thresholds around provisioning for peak times. Members have suggested further workshops be facilitated by the ACCC and DSB on how to address this matter of TPS and response time concerns and achieve a fair and reasonable model across all industries and emerging areas like Action Initiation. Members believe this approach could better serve reaching a resolution than direct feedback to a DP. We would rather have a staged lift in TPS that is tied to a realistic industry consensus forecast. If the increase is staged over a number of years, we would also like a mechanism to periodically revise the required TPS as more data becomes available. Alternatively, if a formulaic approach tied to consents is taken, we would expect that the formula be deployed in a manner that gives Members enough time to budget for and implement system uplifts to cater for increased TPS NFRs, including systems changes for third-party service providers. We also propose that demand management is considered. For example, demand from ADRs could be spread across 24 hours, and not 3 hours in the early morning. This could be enforced through hourly quotas.
Another consideration is restricting the number of times that slow-moving data is queried. If a given data set is only updated daily, then this could be flagged with a new metadata field that ADRs would have to respect, only requesting that data once a day. In conclusion, we note DP 288 raises challenges for smaller DHs around TPS and consents, with a few options raised by the DSB to remediate these under the heading Scaling for large DHs. One proposal includes an ‘increase in the site wide TPS and Session Count NFRs’. Some Members have requested evidence-based data showing why the DSB sees, or foresees, conditions in the ecosystem that warrant a change, and the types of changes being proposed by the DSB. Further discussions or workshops with the ACCC and the DSB to discuss these NFRs and other appropriate matters, to understand how this potential proposal could be applied efficiently, would benefit our Members. If it were applied by the DSB to accommodate a rise in TPS ceiling thresholds, this would likely result in significant investment for affected ABA Members. We thank the DSB again for the opportunity to respond on behalf of our Members, and we are equally thankful for the DSB extending our response date by a week. We look forward to continuing our engagement and thank the DSB for its support in these matters. Yours sincerely Australian Banking Association |
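The slow-moving-data suggestion above could work roughly as sketched below: a per-dataset update interval (standing in for the hypothetical metadata field) that an ADR client respects by caching rather than re-requesting. The function, cache structure and endpoint string are all illustrative assumptions, not part of the standards:

```python
import time

# Hypothetical client-side cache keyed by endpoint: (fetched_at, payload).
_cache: dict[str, tuple[float, dict]] = {}

ONE_DAY = 24 * 60 * 60

def get_with_update_policy(endpoint: str, fetch, update_interval_secs: int = ONE_DAY):
    """Return cached data if the dataset's declared update interval has not elapsed.

    `fetch` is a callable performing the real CDR API call; `update_interval_secs`
    stands in for the proposed metadata field (e.g. daily for slow-moving data).
    """
    now = time.time()
    cached = _cache.get(endpoint)
    if cached is not None and now - cached[0] < update_interval_secs:
        return cached[1]  # still fresh under the declared update frequency
    payload = fetch()
    _cache[endpoint] = (now, payload)
    return payload
```

An ADR following this pattern would hit a daily-updated dataset at most once per day regardless of how often its own consumers ask for it, which is the demand-management effect the ABA describes.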
Please find attached feedback from Telstra |
Thanks everyone for all of the feedback. There was a lot that came in just before or over the Easter weekend. We are going through the feedback and will respond incrementally over the next couple of days. We will leave this consultation open during this time and for a further couple of days so that everyone can respond to what we will be proposing to take to the chair. |
Submitted via email on the 6th of April 2023 NAB Response to Decision Proposal 288 – Non-Functional Requirements National Australia Bank Ltd (NAB) welcomes the opportunity to respond to Decision Proposal 288 Non-Functional Requirements. Due to technical issues, we have not been able to submit our response via GitHub. As such, we provide our response to certain items below. Dynamic Client Registration As per previous GitHub issues listed below, we request that the DCR performance threshold be increased. Whilst we acknowledge that further consultation into DCR has opened under Noting Paper 289 – Register Standards Revision, we request that the increase in the DCR performance threshold be implemented as a quick fix whilst a strategic plan forward is discussed as part of Noting Paper 289. Scaling NFRs for Large Data Holders We suggest further workshops be facilitated by the DSB and ACCC on how to address the TPS issue and achieve a fair and reasonable model across all industries and emerging areas like action initiation. We prefer to have a staged lift in TPS that is tied to a realistic industry consensus forecast. We also suggest that ADRs factor TPS thresholds into their implementations, as Data Holders should not be forced to invest in expanding their capabilities due to ADR implementation choices, i.e. using heavy batch processes to request data in bulk. As the API Availability threshold is set to 99.5% per month and API performance requirements enable fast data sharing, the ecosystem should be moving towards real-time on-demand data. API Response Times Based on the interesting points raised in GitHub issue #566 (Optionality of critical fields is facilitating data quality issues across Data Holder implementations · Issue #566 · ConsumerDataStandardsAustralia/standards-maintenance · GitHub), we believe that NFRs should be enabling the data sharing ecosystem rather than constraining it.
Current NFRs were made binding without extensive consultation or consideration of the unique challenges presented by legacy systems that hold CDR Data. We believe the focus of CDR at this stage should be on data quality and adoption rather than imposing arbitrary, restrictive performance requirements. As one of the CDR principles is that the experience should be commensurate with digital channels, API response times should also be aligned. We strongly recommend that each API performance threshold is increased by at least 1000ms. NFRs for incident response NAB is strongly of the view that service level agreements for incident response must consider implementations where multiple data holders (and potentially third parties) are involved. Such incidents take a considerable amount of time, effort, and coordination between all involved parties. The CDR service management portal should also be uplifted to allow multiple parties to work on an incident and have visibility into it. Impracticality of Current API Performance Requirements for Complex White Label Implementations Context With the acquisition of Citigroup’s consumer banking business, NAB is now the CDR Data Holder for white label credit cards issued under Card Services, Coles Financial Services, Kogan Money Credit Cards, Qantas Premier Credit Cards, Suncorp, Bank of Queensland, and Virgin Money Australia. Whilst some of these white label products are completely serviced by NAB (including CDR Data sharing and data sharing consent), some are serviced in partnership with other institutions, including other ADIs that have their own separate CDR obligations. Further adding to the complexity, CDR data sharing was implemented using a third-party service provider.
Figures 1 and 2 below visualise the current implementations. When these solutions were implemented, the direction was to prioritise customer experience and consistency with existing digital (and non-digital) servicing models, with additional considerations including technical complexity, scalability, compliance deadlines and opportunities to improve existing channel integration. The understanding at the time was that the non-binding NFRs were to undergo a robust consultation prior to becoming binding, and that the consultation would factor in the complexity of white label arrangements, especially ones where multiple parties are involved to provide an optimal customer experience. API Response Time Requirements Current NFRs for API response times are not achievable for white label implementations where one ADR-facing party must integrate with multiple Data Holders to provide CDR Data. Currently, the API response times measure individual API response times; however, in a complex white label implementation, there are multiple steps that need to be completed in the background to:
An additional consideration in this scenario is network latency, especially in instances where the infrastructure of the involved parties is not in the same region or country. This consequently means that even in a scenario where each individual Data Holder meets the prescribed API response times, the nature of the implementation means that the ADR-facing API response time will be over the threshold. NAB is of the view that the issue could be addressed by increasing the API response time thresholds across the board, which we believe would have a broader positive impact on the ecosystem. It would alleviate NFR pressures on Data Holders, who are often in a position where they must make trade-offs to remain compliant with NFRs. NAB believes that the focus of the CDR ecosystem should remain on customer experience and adoption. Alternatively, the metrics reporting could be enhanced to allow ADR-facing Data Holders to report metrics based on their own environment, with additional fields to report on data sharing metrics of another Data Holder that supplies CDR Data via a private integration. NAB would welcome the opportunity to contribute to a discussion regarding the development of new metrics applicable to complex white label arrangements. |
Considering the comments around the difficulty of Data Holder implementation whilst balancing other work and obligations, Basiq would be supportive of a phased delivery approach; further discussion is required to agree on and prioritise the "most useful" and "easier to implement" metrics. I would prefer to have the several most useful metrics in three months rather than all metrics in 12 months. |
On the topic of a TPS metric: it is always going to be a challenge for Data Holders to "right size" their infrastructure in order to avoid negatively affecting a consumer. For instance, crystal balls or true elastic scalability will be required to set TPS and Session Count NFRs based on the number of active authorisations the data holder has. Can I suggest the TPS metric drive the ongoing obligation, i.e. Data Holders do not just report on TPS but on % utilisation of their current limit. If metrics show TPS is regularly exceeding a defined threshold (e.g. 90%), the Data Holder should be obligated to raise their TPS. |
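The utilisation-driven obligation suggested above could be sketched as follows. The 90% threshold comes from the comment; the function names and the idea of flagging an obligation programmatically are illustrative assumptions:

```python
def tps_utilisation(observed_peak_tps: float, current_tps_limit: float) -> float:
    """Return peak observed TPS as a fraction of the Data Holder's current limit."""
    if current_tps_limit <= 0:
        raise ValueError("TPS limit must be positive")
    return observed_peak_tps / current_tps_limit

def must_raise_limit(observed_peak_tps: float, current_tps_limit: float,
                     threshold: float = 0.9) -> bool:
    """Flag an obligation to raise the limit when utilisation exceeds the threshold.

    In practice this check would run over regular reporting periods rather than
    a single observation, so sustained (not one-off) exceedance triggers it.
    """
    return tps_utilisation(observed_peak_tps, current_tps_limit) > threshold
```

Reporting the utilisation percentage alongside raw TPS, as the comment proposes, would let the obligation scale automatically instead of requiring periodic renegotiation of fixed ceilings.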
One last comment on the Consent Metrics. For abandonedConsentCount, could we get more granular than "for any reason"? A Data Holder should be able to detect the difference between:
|
Here are the proposed changes to the Non-Functional Requirements for further feedback. These are candidate changes to be proposed to the Chair unless there is feedback indicating they should change:

Non-Functional Requirement Changes

Tiering of Traffic Thresholds

As there was consensus support for a tiered approach to traffic thresholds based on the number of active authorisations, the DSB is proposing amendments to the standards as outlined below. These thresholds have been developed from the data that the DSB has been able to obtain regarding actual TPS and authorisation metrics for existing data holders. The following statements in the standards in the
These statements will be replaced with:
Note that this will be a reduction in expectation for the vast majority of existing Data Holders and will be an increase in expectation for a small number of the most active Data Holders. It is proposed that these changes will be tied to a Future Dated Obligation of Obligation Date Y23 No. 5 (13/11/2023).

NFRs for Low Velocity Data

To provide a differentiation for the calling by ADRs of low velocity data sets, the following text will be added to the
As this change is really an expansion of the requirement that ADRs |
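A tiered threshold of the kind proposed can be modelled as a lookup from active authorisation count to a TPS ceiling. The tier boundaries and lower TPS values below are illustrative placeholders only (the proposed figures are not reproduced in this thread); the 50,000-authorisation boundary and the 300/500 TPS ceilings echo numbers raised in the surrounding feedback:

```python
# Hypothetical tiers: (max active authorisations, TPS ceiling).
# These boundaries are placeholders, not the DSB's proposed values.
TIERS = [
    (5_000, 50),
    (25_000, 150),
    (50_000, 300),
]
TOP_TIER_TPS = 500  # ceiling for the largest Data Holders (illustrative)

def tps_ceiling(active_authorisations: int) -> int:
    """Return the TPS ceiling for a Data Holder's active authorisation count.

    Walks the tiers in ascending order and returns the first matching ceiling;
    holders above every boundary fall into the top tier.
    """
    for max_auths, tps in TIERS:
        if active_authorisations <= max_auths:
            return tps
    return TOP_TIER_TPS
```

A lookup like this makes the "reduction for most, increase for the most active" effect explicit: small holders land in low tiers while only those above the top boundary face the highest ceiling.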
We are supportive of tiering to allow lower thresholds for Data Holders with fewer users, however we do not agree that the maximum TPS levels should change at this time. Any uplift of TPS beyond 300TPS has a large impact on Data holders. Given this, the proposal to introduce new tiers of up to 500TPS should be performed through a dedicated Decision Proposal. This will allow Data Holders visibility of this impactful change, and time to assess implementation considerations. |
I think there needs to be a distinction between the type of consumer that these non-functional requirements apply to. I refer in particular to rules around response times for the "Get Bulk" APIs, which have no upper limit on the number of accounts expected in the response. This means, for example, that the expected response time for "Get Bulk Billing" for a consumer with 1 account is the same as for a consumer with 10, 50, or 100 accounts, when in practice increasing the number of accounts naturally results in an increased response time due to the amount of data requested. I suggest reviewing the practicality of applying the same response time to every CDR customer. I note that these requirements seem written predominantly for retail/mass market consumers. It is possible for business customers in energy to have more than 100 accounts. |
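One way to express the scaling concern above is a response-time threshold that grows with the number of accounts in scope rather than a flat figure. The base and per-account values below are purely illustrative, not figures from the standards:

```python
def bulk_response_threshold_ms(account_count: int,
                               base_ms: int = 1500,
                               per_account_ms: int = 50) -> int:
    """Illustrative sliding NFR: a flat budget plus an allowance per extra account.

    base_ms and per_account_ms are hypothetical numbers chosen for the sketch;
    the point is the shape (linear in account count), not the specific values.
    """
    if account_count < 1:
        raise ValueError("account_count must be at least 1")
    return base_ms + per_account_ms * (account_count - 1)
```

Under this shape, a single-account retail consumer and a 100-account energy business get materially different budgets, which is the distinction the comment argues the current flat NFR fails to draw.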
Dear DSB, In light of the discussion today with the DSB and our Members, who are seeking further opportunity to provide additional feedback, can we please request extending the consultation for another week to 19 May 2023. Kindest Regards, |
We also agree with @AusBanking-Christos and would like to request for an extension of the consultation period to 19 May 2023. Kind regards, |
This consultation will be extended until the 19th May as requested. The DSB would prefer not to extend this consultation any further beyond this date. We understand the need for modifications to NFRs to be an ongoing process and to be based on objective data. It would appear that this may require changes to the NFRs to be supported by a more specific, regular and engaged consultation process. To that end, we are planning a series of workshops specifically on NFRs for the ecosystem in late July or early August. These workshops will be used to work with the community to create an ongoing consultation process for NFRs that works for everyone, as well as to canvass the community about any issues and solutions related to the NFR standards that the community wishes to raise. More details on these workshops will be announced in due course. |
Comment on the NFRs for low velocity data:
|
The feedback for this DP is significant, making it difficult to comment further. What I'll note here is that it appears to discuss both NFR and Metrics endpoints simultaneously, when the reality is that they are two separate spheres. Essentially, NFRs set the thresholds and are likely to be more structural architecture components, while Metrics report on them, which is more of an engineering activity. I suggest a more focused pair of DPs be proposed so that feedback can more easily be targeted at the specific areas. |
18 May 2023 Submitted on Thursday, 18 May 2023 via: Consumer Data Standards Australia - GitHub site Dear @CDR-API-Stream ABA Follow Up Response to Decision Proposal 288 – Non-Functional Requirements ABA welcomes the plan for a series of workshops on NFRs for the CDR ecosystem. The proposed multistakeholder approach will lay the foundations for a shared and transparent capacity planning framework that balances the needs of all participants while ensuring appropriate customer outcomes. The workshops will provide the opportunity for richer performance data to be assessed when setting NFR standards. ABA member banks commit to working with the DSB ahead of the meeting to identify a consistent data set that will be the most useful contribution to the workshop process. We welcome the opportunity to contribute toward the development of NFR standards that will result in a sustainable and predictable capacity planning model for all CDR participants. |
Basiq feedback on the Traffic Thresholds proposed amendment is we are generally supportive but still concerned with the upper boundary. A highest limit dictated in
seems low considering Basiq currently has considerably more than 50,000 screen scrape active authorisations with each of the major banks, some in the hundreds of thousands. We intend to move all of these connections from screen scraping to open banking CDR connections. If the CDR intends to move data sharing from screen scraping to CDR, it needs to both support the existing load and provide some overhead. I don't believe the current proposal does this. |
Thank you for the opportunity to provide feedback on this area of discussion. AGL does not support the tiering of thresholds for TPS for energy. This is because:
AGL requests that these proposed changes are delayed and revisited (for energy) until such time that at least twelve months of real-world traffic volumes have been observed following Tranche 3 Large Retailer Go Live. (November 2024) |
Regarding additional consent metrics, CBA suggests a similar outcome, i.e. improvements to the consumer experience for the consent flow to reduce drop-off rates, could be achieved through a consultative approach with ADRs and DHs. Our recommendation is that a sample of relevant consent metrics be amalgamated by participants and provided to the DSB as input. This approach would be more cost-effective for the ecosystem, achieve a similar outcome and avoid regret spend if authentication and consent flows are matured to enable Action Initiation in the future. |
In light of recent discussions, ANZ requests that the proposed tiering remains conditional upon the outcome of the forthcoming workshops. Given the complex nature of open banking systems, meeting the revised tiering is unlikely to be a simple scaling out exercise. The workshops must consider that data holders will require extensive capacity planning, design and implementation activities. |
NAB welcomes the plan for further workshops on NFRs. As the topic appears to be of great interest to the CDR community, we recommend these workshops be scheduled sooner rather than later to maintain the positive momentum. We also acknowledge DSB feedback regarding white label implementations and are keen to engage with the DSB and any other interested participants to explore the topic in detail. With regards to the proposed Get Metrics future dated obligations, we request that they be pushed back by one release cycle, i.e. that the v4 FDO be aligned with Y24 #1 (11/03/2024) and v5 with Y24 #2 (13/05/2024). |
Westpac welcomes the opportunity to respond to the additional proposals added to DP288. Scaling NFRs for Large Data Holders A tiered approach by activity is an improvement on the current standard. Nevertheless, Westpac suggests that the proposal needs to be evidence-based prior to structuring the tiering levels and thresholds. Our evidence suggests that current activity in the ecosystem does not warrant the unusually high thresholds in the current proposal. We welcome the opportunity to discuss the TPS proposal in the planned workshops for July-August, and we support earlier comments that this is not ready for presentation to the DSB Chair. Westpac notes that it is difficult to set a fair and adequate TPS level without the context of the use cases that the ecosystem wants to support, since some use cases require more load than others. We suggest that the focus should be on activity growth in the medium-term future only. Handling of larger volumes can be revisited as the ecosystem matures, with a clearer pipeline of future use cases and activity types flowing in the ecosystem. This would allow better allocation and direction of investment that is aligned to the Government's intention as announced in the recent Budget. Proposed changes to existing metrics Westpac is broadly supportive of the proposed changes to existing metrics. Proposed new metrics Westpac notes from various comments above that there may be various uses for the statistics around ‘abandonment by stage’ by different parties within the ecosystem (regulators, ADRs, DHs, incumbents, and prospects). We suggest the following improvements to increase the value of the new metrics prior to implementing changes:
Westpac also notes that there are many comments and questions around the definition of the metrics that need to be discussed and resolved prior to presentation to the DSB Chair. As the nature and size of the change vary depending on these definitions, it would be more appropriate to set the delivery timelines after the conclusion of the discussions or workshops. In light of the current backlog of standards changes, we ask that a minimum of 9 months be provided to allow organisations to budget resources and deliver. The ecosystem cannot sustain ongoing urgent revisions to standards, as we have recently experienced with FAPI 1.0.
Thank you for this opportunity to make a submission. EnergyAustralia submits the following: with the energy sector only so recently going live, the existing NFRs for us as a Data Holder remain untested by the ADR usage seen to date. The need to revise the NFRs so dramatically, and then to apply them to the energy sector, would therefore be premature. We are aligned with the AGL submission made on this topic, which is reflective of the energy sector. A staged approach that retains the existing NFRs for energy may well prove more suitable for supporting a nascent CDR sector, and would avoid the risk of over-funding capacity. A sectoral approach should be based on CDR usage statistics from the energy sector, so that when it reaches the maturity of the banking sector it moves to the next stage of NFRs. This would see more appropriate NFRs for more mature sectors, and the existing NFRs retained for sectors new to the CDR, like energy, for their first two years. Publication of metrics of overall usage remains of benefit. However, more detailed publication of NFR metrics on such small usage numbers is of little industry benefit until the two-year point following CDR implementation, and only if volumes increase; such limited usage will skew the figures and potentially misrepresent any conclusions drawn. Further, we specifically endorse the final paragraph of the AGL submission on AEMO performance, which concludes: "AGL considers that it would be appropriate for AEMO to establish its own service desk arrangement for the resolution of tickets directly with ADRs and reduce administrative pressures on data holders to manage these issues."
Ecosystem metrics data exposed by participants via the Get Metrics API is published by the ACCC on the CDR Performance Dashboard. Several of the changes proposed here will result in breaking changes for the Performance Dashboard, including breaking the continuity of historical data and the ability to compare metrics between versions, which is a mandatory requirement for us to operate and regulate the system. While breaking changes are sometimes necessary to advance the API and enrich metrics data over time, their impact on the Dashboard needs to be managed. In reference to the JSON schemas for v4 and v5 as posted in the above comment, we provide the following feedback:
Thanks everyone for the final feedback. This thread will now be locked and responses will be posted and a final decision created for submission to the chair. Note that the final decision on TPS thresholds may take a few more days as the DSB have been offered additional data that may influence the specifics of the thresholds to be set. |
Response to feedback on low latency data clusters:
In response to the question from AEMO: yes, the current proposal would allow each page of usage history to be called up to 10 times per day. If the threshold needs to be adjusted based on actual experience, that can be consulted on in the future. In the interim, this should allow calls made every couple of minutes to be managed in a way that protects core systems.
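The cap described in this response can be sketched as a simple counter keyed on (day, page). This is an illustrative sketch only, assuming an in-memory counter; the class and method names are hypothetical and not part of the standards.

```python
from collections import defaultdict
from datetime import date

class PageCallLimiter:
    """Illustrative limiter: allow up to N calls per usage-history page
    per calendar day. The default of 10 mirrors the proposed threshold;
    everything else here is an assumption for illustration."""

    def __init__(self, max_calls_per_day=10):
        self.max_calls = max_calls_per_day
        self._counts = defaultdict(int)  # (day, page) -> calls so far

    def allow(self, page, today=None):
        """Return True and record the call if this page is under its
        daily cap; return False once the cap is reached."""
        day = today or date.today()
        key = (day, page)
        if self._counts[key] >= self.max_calls:
            return False
        self._counts[key] += 1
        return True
```

At the proposed threshold of 10 calls per page per day, a poller hitting the same page every couple of minutes would be throttled early in the day, which is the protective effect described above.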
Response to feedback on changes to the Get Metrics API: Additional Changes
Implementation Considerations
Responses to other feedback Some participants suggested other mechanisms for providing data rather than updating the Get Metrics API. We have not modified our proposal in response to this feedback for two reasons:
The suggestion that authorisation abandonment metrics should be aligned to software product has not been incorporated into the proposal. Doing so would increase implementation costs and would be of minimal value, as the proposed metrics only come into play once the software product process has successfully completed (i.e. the customer has already accepted the proposed consent presented by the ADR). The metrics are only representative of the data holder screens, which are common across all software products. We may consider this feedback in the future if the concept of data recipient metrics is introduced to the CDR. It was suggested that energy retailers should not be required to provide metrics until the two-year point of their implementation. This is not consistent with the requirements applied to the banking sector in the past; more importantly, it is not a question being considered in this consultation, so no action is being taken on this feedback.
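Since the data holder screens are common to every software product, the abandonment metrics can be aggregated by stage alone. A minimal sketch of that aggregation (the event shape, field names, and stage labels are hypothetical, not the CX stages defined in the standards):

```python
from collections import Counter

# Hypothetical raw authorisation-flow outcomes. A value of None for
# "abandoned_at" means the authorisation completed successfully.
events = [
    {"software_product": "sp-1", "abandoned_at": "preAuthentication"},
    {"software_product": "sp-2", "abandoned_at": "preAuthentication"},
    {"software_product": "sp-1", "abandoned_at": "preAccountSelection"},
    {"software_product": "sp-3", "abandoned_at": None},
]

def abandonment_by_stage(events):
    """Count abandonments per stage across ALL software products,
    ignoring which product initiated the flow."""
    return Counter(e["abandoned_at"] for e in events if e["abandoned_at"])
```

Note that the software product identifier plays no part in the aggregation, which is the point being made above: a per-product breakdown would add cost without changing the stage-level picture.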
The DSB has received data from multiple banks related to the number of active authorisations and TPS levels. Each of the banks that provided detailed data asked that it be kept confidential, so it will not be published, but it does give a stronger evidence foundation for setting the TPS tiering. As a result of this data, it appears that our initial proposal (which was based only on the number of customers) was far too aggressive and should be altered significantly. The new proposed tiering for site-wide authenticated peak TPS will therefore be:
Note that the implications of this tiering strategy are as follows:
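A tiering scheme of this kind amounts to a step function from active authorisations to a required peak TPS. The sketch below is purely illustrative: the thresholds and TPS values are placeholder numbers, not the figures proposed or approved in this decision.

```python
# Placeholder tiers: (minimum active authorisations, required peak TPS).
# These numbers are invented for illustration only.
TIERS = [
    (0, 50),
    (10_000, 100),
    (50_000, 200),
    (100_000, 300),
]

def peak_tps_for(active_authorisations):
    """Return the site-wide authenticated peak TPS a data holder would
    need to support under this placeholder tiering: the TPS of the
    highest tier whose threshold has been reached."""
    tps = TIERS[0][1]
    for threshold, tier_tps in TIERS:
        if active_authorisations >= threshold:
            tps = tier_tps
    return tps
```

The design choice worth noting is that tiering on active authorisations (rather than customer count, as in the initial proposal) ties the obligation to actual ecosystem activity.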
Responses to specific feedback from participants are as follows:
The Data Standards Chair has approved Decision 288, attached below. It is intended that this decision will be published in the standards in v1.25.0 within the next two weeks.
This decision proposal contains a proposal for changes to Non-Functional Requirements and the Get Metrics end point based on feedback received through various channels.
The decision proposal is embedded below:
Decision Proposal 288 - Non-Functional Requirements Revision.pdf
Consultation on this proposal will close on the 7th April 2023. Note that the proposed changes have been republished in this comment and this comment
Consultation will be extended to obtain feedback on these updated changes until the ~~5th May 2023~~ ~~12th May 2023~~ 19th May 2023.