diff --git a/changelog.rst b/changelog.rst
new file mode 100644
index 0000000..514348d
--- /dev/null
+++ b/changelog.rst
@@ -0,0 +1,8 @@
+============
+Change Log
+============
+
+10.9.0.6
+---------
+
++ Added support for testing the performance of EPL apps. Check the documentation for more details.
diff --git a/doc/performance-testing.rst b/doc/performance-testing.rst
new file mode 100644
index 0000000..571abac
--- /dev/null
+++ b/doc/performance-testing.rst
@@ -0,0 +1,298 @@
+=====================================================
+Testing the performance of your EPL apps
+=====================================================
+:Description: Guide for using the PySys framework to test the performance of your EPL apps.
+
+Introduction
+============
+
+An EPL app can be tested either against real devices or against simulated devices with simulated data. Writing a performance test therefore generally involves:
+
++ Creating a test.
++ Defining the test options.
++ Preparing the Cumulocity IoT tenant.
++ Creating device simulators (if applicable).
++ Deploying EPL apps.
++ Sending measurements.
++ Monitoring the performance.
++ Generating the performance reports.
+
+This document demonstrates the common process involved in writing a performance test for your existing EPL apps. The performance tests described in this document use the EPL apps SDK, which is based on the PySys test framework. See the `PySys documentation `_ for details on installation, how the framework can be used, and the facilities it contains. Set up the EPL apps SDK by following the steps mentioned in :ref:`Setup for testing in the Cumulocity IoT cloud `.
+
+Writing a performance test
+===========================
+
+Creating a test
+----------------
+A PySys test case comprises a directory with a unique name, containing a pysystest.xml file and an Input directory with your application-specific resources.
+
+To create a test, you can either copy an existing test (such as one from the samples-performance directory) and rename it, or create a new one by running the following:
+
+.. code-block:: shell
+
+    pysys make TestName
+
+The run.py file contains the main logic of the test. The ``PySysTest`` class of a performance test should extend the ``apamax.eplapplications.perf.basetest.EPLAppsPerfTest`` class, which provides convenient methods for performance monitoring and reporting.
+
+.. code-block:: python
+
+    from apamax.eplapplications.perf.basetest import EPLAppsPerfTest
+
+    class PySysTest(EPLAppsPerfTest):
+        def execute(self):
+            ...
+
+        def validate(self):
+            super(PySysTest, self).validate()
+
+.. _test-options:
+
+Defining the test options
+---------------------------
+A test may define options, such as the test duration or measurement types, that you might want to change or override when running the test. To define an option, define a static attribute on the test class and provide a default value. For example:
+
+.. code-block:: python
+
+    class PySysTest(EPLAppsPerfTest):
+        # Can be overridden at runtime, for example:
+        # pysys run -XmyTestDuration=10
+        myTestDuration = 60.0
+
+        def execute(self):
+            self.log.info(f'Using myTestDuration={self.myTestDuration}')
+            ...
+
+Once the default value is defined with a static attribute, you can override the value when you run your test using the ``-X`` option:
+
+.. code-block:: shell
+
+    pysys run -XmyTestDuration=10
+
+See the `PySys test options `_ in the PySys documentation for details on configuring and overriding test options.
+
+Preparing the Cumulocity IoT tenant
+------------------------------------
+The performance test must make sure that the Cumulocity IoT tenant used for testing the EPL app is prepared. This is done by calling the ``prepareTenant`` method before the EPL apps are deployed.
+
+The ``prepareTenant`` method performs the following actions:
+
++ Deletes any test devices created by previous tests (identified by device names with the prefix "PYSYS\_") from your tenant.
++ Deletes any test EPL apps (which have the "PYSYS\_" prefix in their name) that have previously been uploaded to your tenant by the framework.
++ Clears all active alarms in your tenant.
++ Optionally, restarts the Apama-ctrl microservice.
+
+The ``prepareTenant`` method must be called at the start of the test before any EPL apps are deployed. If the test is testing the same EPL app with different configurations, then the tenant must be prepared before each iteration.
+
+It is recommended to restart the Apama-ctrl microservice when preparing a tenant so that resources like memory are not influenced by any previous test runs.
+
+The ``prepareTenant`` method does not delete any user-uploaded EPL apps or user-created devices. You should disable any user-uploaded EPL apps that could interfere with the performance test, for example by producing or updating data that is consumed by the EPL apps being tested. It may be prudent to disable all existing EPL apps in the tenant to obtain accurate performance numbers.
+
+Creating device simulators
+---------------------------
+If the test needs to use simulated devices, they can easily be created within the test. A device can be created by calling the ``createTestDevice`` method.
+
+All created devices are prefixed with "PYSYS\_" so that devices created by the test can be identified and kept distinct from user-created devices. Due to the prefix, all devices created using the ``createTestDevice`` method are deleted when the ``prepareTenant`` method is called.
+
+If devices are created without using the ``createTestDevice`` method, make sure that the device names are prefixed with "PYSYS\_" so that they can be deleted when a tenant is prepared for a performance test run.
+
+Deploying EPL apps
+-------------------
+EPL apps can be deployed by using the ``deploy`` method of the ``EPLApps`` class. The field ``eplapps`` of type ``EPLApps`` is available for performance tests.
+
+The performance test may need to customize EPL apps for performance testing, for example by defining the threshold limit or the type of measurements to listen for. The performance test may also test EPL apps for multiple values of some parameters in a single test or across multiple tests. One approach to customizing EPL apps for testing is to use placeholder strings in the EPL apps and then replace them with actual values before deploying the apps to Cumulocity IoT. For example::
+
+    monitor MySimpleApp {
+        constant float THRESHOLD := @MEASUREMENT_THRESHOLD@;
+        constant string MEAS_TYPE := "@MEASUREMENT_TYPE@";
+        ...
+    }
+
+In the above example app, the values of the ``THRESHOLD`` and ``MEAS_TYPE`` constants are placeholder strings that need to be replaced by the performance test. It is recommended to surround the replacement strings with some marker characters so that they are distinct from normal strings.
+
+The ``copyWithReplace`` method creates a copy of the source file by replacing the placeholder strings with the replacement values.
+
+For example, the above EPL app can be configured and deployed as follows:
+
+.. code-block:: python
+
+    # Create a dictionary with replacement strings.
+    appConfiguration = {
+        'MEASUREMENT_THRESHOLD': '100.0',
+        'MEASUREMENT_TYPE': 'myMeasurements',
+    }
+    # Replace placeholder strings with replacement values and create
+    # a copy of the EPL app in the test's output directory.
+    # Specify that the marker character for placeholder strings is '@'.
+    self.copyWithReplace(os.path.join(self.project.EPL_APPS, 'MyApp.mon'),
+        os.path.join(self.output, 'MyApp.mon'), replacementDict=appConfiguration, marker='@')
+
+    # Deploy the EPL app with replaced values.
+    self.eplapps.deploy(os.path.join(self.output, "MyApp.mon"), name='PYSYS_MyApp', redeploy=True,
+        description='Application under test, injected by test framework')
+
+Replacement values can also come from test options so that they can be overridden when running tests. See `Defining the test options`_ for more details.
+
+**Note:** It is recommended to prefix the names of the EPL apps with "PYSYS\_" when deploying them. This allows all EPL apps deployed during the tests to be disabled at the end of the test and deleted when preparing the tenant for a test run.
+
+Sending measurements
+------------------------
+A performance test can use either real-time measurements from real devices or simulated measurements from simulated devices. To generate simulated measurements, the test can start measurement simulators to publish simulated measurements to Cumulocity IoT at a specified rate, which are then consumed by the EPL apps being tested.
+
+Different tests may have different requirements for the measurements being published. For example, a test may want to customize the type of measurements or the range of measurement values. To support such requirements, the framework requires tests to define a measurement creator class to create measurements of the desired types. A measurement simulator uses a measurement creator object to create the measurements it publishes to Cumulocity IoT.
+
+The following example shows a test defining a measurement creator class to create measurements within a configurable range:
+
+.. code-block:: python
+
+    # In the 'creator.py' file in the test Input directory.
+    import random
+    from apamax.eplapplications.perf import ObjectCreator
+
+    class MyMeasurementCreator(ObjectCreator):
+        def __init__(self, lowerBound, upperBound):
+            self.lowerBound = lowerBound
+            self.upperBound = upperBound
+
+        def createObject(self, device, time):
+            return {
+                'time': time,
+                "type": 'my_measurement',
+                "source": { "id": device },
+                'my_fragment': {
+                    'my_series': {
+                        "value": random.uniform(self.lowerBound, self.upperBound)
+                    }
+                }
+            }
+
+Once the measurement creator class is defined, the test can start a measurement simulator process to generate measurements for specified devices at a specified rate per device by calling the ``startMeasurementSimulator`` method. The test needs to pass the path to the Python file containing the measurement creator class, the name of the measurement creator class, and the values for the constructor parameters.
+
+For example, a test can use the above measurement creator class to generate measurements in the range of 50.0 to 100.0:
+
+.. code-block:: python
+
+    # In the run.py file of the test
+    class PySysTest(EPLAppsPerfTest):
+        ...
+        def execute(self):
+            ...
+            self.startMeasurementSimulator(
+                ['12345', '12346'],          # Device IDs
+                1,                           # The rate of measurements to publish per device per second
+                f'{self.input}/creator.py',  # The path to the Python file containing the MyMeasurementCreator class
+                'MyMeasurementCreator',      # The name of the measurement creator class
+                [50, 100],                   # The constructor parameters for the MyMeasurementCreator class
+            )
+            ...
+
+Monitoring the performance
+---------------------------
+The framework provides support for monitoring standard resource metrics of the Apama-ctrl microservice and EPL apps. Performance monitoring can be started by calling the ``startPerformanceMonitoring`` method.
+
+The framework repeatedly collects the following raw resource metrics:
+
++ Aggregate physical memory usage of the microservice (the combination of the memory used by the JVM helper and the Apama correlator process).
++ Aggregate CPU usage of the microservice in the most recent period.
++ Size of the correlator input queue.
++ Size of the correlator output queue.
++ The total number of events received by the correlator during the entire test.
++ The total number of events sent from the correlator during the entire test.
+
+The CPU usage of the microservice is the total CPU usage of the whole container, as reported by the OS for the container's cgroup.
+
+These metrics are then analyzed (mean, median, etc.) and used for graphing when the performance report is generated at the end of the test.
+
+The test should wait for some time for performance metrics to be gathered before generating the performance report. It is good practice to define this duration as a test option so that it can easily be configured when running a performance test.
+
+Generating the performance report
+----------------------------------
+Once the test has waited for the specified duration for the performance metrics to be collected, it must call the ``generateHTMLReport`` method to enable generation of the performance report in HTML format. The performance report (report.html) is generated at the end of the test in the test's output directory.
+
+If the test is testing the same EPL app with different configurations, then the ``generateHTMLReport`` method must be called at the end of each iteration. The performance report contains the results of each iteration.
+
+Test configuration details can also be included in the report. The test should provide the values of all test options and test-controlled variables so that they are visible in the report.
+
+In addition to the standard performance metrics, the HTML report can also contain additional performance metrics provided by the test, such as the number of alarms raised.
+
+For example:
+
+.. code-block:: python
+
+    self.generateHTMLReport(
+        description='Performance of MyExample app',
+        # Test configurations and their values
+        testConfigurationDetails={
+            'Test duration (secs)': 30,
+            'Measurement rate': 10,
+        },
+        # Extra performance metrics to include in the report.
+        extraPerformanceMetrics={
+            'Alarms raised': alarms_raised,
+            'Alarms cleared': alarms_cleared,
+        })
+
+Running the performance test
+=============================
+Performance tests can only be run using a Cumulocity IoT tenant with EPL apps enabled. Set up the framework to use a Cumulocity IoT tenant by following the steps mentioned in :ref:`Setup for testing in the Cumulocity IoT cloud `.
+
+When running a test, test options can be overridden by using the ``-X`` argument.
See `Defining the test options`_ for details on defining and providing test options.
+
+For example, to change the test duration of the ``AlarmOnThreshold`` test, run the following:
+
+.. code-block:: shell
+
+    pysys run -XtestDuration=180 AlarmOnThreshold
+
+At the end of the test, a basic validation of the test run is performed. See `PySys helpers `_ in the EPL Apps Tools documentation for details on the validations performed.
+
+
+Performance report
+================================
+At the end of a performance test, an HTML report is generated in the test's output directory. When running multiple iterations of the same EPL app with different configurations, the results of each iteration are included in the report. The report contains metadata about the microservice and the Cumulocity IoT environment, test-specific configurations, a performance summary, and graphs.
+
+The report contains the following metadata about the microservice and the Cumulocity IoT environment:
+
++ Cumulocity IoT tenant URL
++ Cumulocity IoT platform version
++ Apama-ctrl microservice name
++ Apama-ctrl microservice version (product code PAQ)
++ Apama platform version (product code PAM)
++ Microservice resource limits
+
+The report also contains the test-specific configurations specified when calling the ``generateHTMLReport`` method. These usually include all test-controlled variables.
+
+The report contains the minimum, maximum, mean, median, 75th percentile, 90th percentile, 95th percentile, and 99th percentile values of the following standard performance metrics:
+
++ Total physical memory consumption of the microservice (MB)
++ JVM helper physical memory consumption (MB)
++ Apama correlator physical memory consumption (MB)
++ Correlator input queue size
++ Correlator output queue size
++ Correlator swap rate
++ Total CPU usage of the whole container (milliCPU)
+
+Additionally, the report contains the following standard performance metrics and any extra performance metrics supplied by the test:
+
++ Total number of events received into the Apama correlator
++ Total number of events sent from the Apama correlator
+
+The report also contains the following graphs over the duration of the test:
+
++ Correlator input queue and output queue size
++ Total microservice memory consumption, JVM helper memory consumption, and Apama correlator memory consumption
++ Microservice CPU usage
+
+The summary of the various performance metrics and graphs provides an overview of how the microservice performed during the test run and how its performance varies across different configurations and workloads.
+
+Sample EPL apps and tests
+=========================
+Multiple sample EPL apps and tests can be found in the samples-performance directory of the EPL Apps Tools SDK. The structure of the samples-performance directory is as follows:
+
+| +--samples-performance
+| +-----pysysdirconfig.xml
+| +-----pysysproject.xml
+| +-----apps/
+| +-----correctness/
+| +-----performance/
+
+The apps directory contains multiple sample apps for performance testing. The correctness directory contains basic correctness tests for the sample apps. It is recommended to always test your EPL apps for correctness before testing them for performance. See :doc:`Using PySys to test your EPL apps ` for details on testing EPL apps for correctness. The performance directory contains performance tests for each sample app. These tests can be run as explained in `Running the performance test`_.
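+
+Putting the steps together, a complete performance test might look like the following condensed sketch. This is illustrative only and is not one of the shipped samples; the app name (MyApp.mon), device name, placeholder values, and option values are placeholders, and creator.py refers to the measurement creator class shown in `Sending measurements`_:
+
+.. code-block:: python
+
+    import os, time
+    from apamax.eplapplications.perf.basetest import EPLAppsPerfTest
+
+    class PySysTest(EPLAppsPerfTest):
+        # Can be overridden at runtime, for example: pysys run -XtestDuration=300
+        testDuration = 60.0
+        measurementRate = 10
+
+        def execute(self):
+            # Prepare the tenant; restarting the microservice gives a clean baseline.
+            self.prepareTenant(restartMicroservice=True)
+
+            # Create a simulated device to publish measurements for.
+            deviceId = self.createTestDevice('device1')
+
+            # Configure and deploy the EPL app under test.
+            self.copyWithReplace(os.path.join(self.project.EPL_APPS, 'MyApp.mon'),
+                os.path.join(self.output, 'MyApp.mon'),
+                replacementDict={'MEASUREMENT_THRESHOLD': '100.0', 'MEASUREMENT_TYPE': 'myMeasurements'},
+                marker='@')
+            self.eplapps.deploy(os.path.join(self.output, 'MyApp.mon'), name='PYSYS_MyApp',
+                redeploy=True, description='Application under test, injected by test framework')
+
+            # Start publishing simulated measurements and monitoring performance.
+            self.startMeasurementSimulator([deviceId], self.measurementRate,
+                f'{self.input}/creator.py', 'MyMeasurementCreator', [50.0, 100.0])
+            self.startPerformanceMonitoring()
+
+            # Let the test run for the configured duration while metrics are gathered.
+            time.sleep(float(self.testDuration))
+
+            # Generate the HTML performance report for this run.
+            self.generateHTMLReport(
+                description='Performance of MyApp',
+                testConfigurationDetails={
+                    'Test duration (secs)': self.testDuration,
+                    'Measurement rate': self.measurementRate,
+                })
+
+        def validate(self):
+            super(PySysTest, self).validate()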
diff --git a/doc/pydoc/_modules/apamax/eplapplications/basetest.html b/doc/pydoc/_modules/apamax/eplapplications/basetest.html index 0ee9052..e640a6f 100644 --- a/doc/pydoc/_modules/apamax/eplapplications/basetest.html +++ b/doc/pydoc/_modules/apamax/eplapplications/basetest.html @@ -4,7 +4,7 @@ - apamax.eplapplications.basetest — EPL Apps Tools 10.9.0.1 documentation + apamax.eplapplications.basetest — EPL Apps Tools 10.9.0.6 documentation @@ -26,7 +26,7 @@

Navigation

  • modules |
  • - + @@ -54,41 +54,49 @@

    Source code for apamax.eplapplications.basetest

    < import urllib.request import xml.etree.ElementTree as ET import os +import urllib import inspect import hashlib sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))))) from apamax.eplapplications.eplapps import EPLApps from apamax.eplapplications.platform import CumulocityPlatform from apamax.eplapplications.connection import C8yConnection +from datetime import datetime APPLICATION_NAME = 'pysys-test-application' APPLICATION_KEY = 'pysys-test-key'
    [docs]class ApamaC8YBaseTest(BaseTest): """ - Base test for EPL Applications tests. + Base test for EPL applications tests. - Requires the following to be set on the project in pysysproject.xml file (typically from the environment): + Requires the following to be set on the project in the pysysproject.xml file (typically from the environment): - EPL_TESTING_SDK - - APAMA_HOME - only if running a local correlator + - APAMA_HOME - Only if running a local correlator. """
    [docs] def setup(self): super(ApamaC8YBaseTest, self).setup() + # Check EPL_TESTING_SDK env is set + if not os.path.isdir(self.project.EPL_TESTING_SDK): + self.abort(BLOCKED, f'EPL_TESTING_SDK is not valid ({self.project.EPL_TESTING_SDK}). Please set the EPL_TESTING_SDK environment variable.') + self.modelId = 0 - self.TEST_DEVICE_PREFIX = "PYSYS_" + self.TEST_DEVICE_PREFIX = "PYSYS_" + self.EPL_APP_PREFIX = self.TEST_DEVICE_PREFIX # connect to the platform self.platform = CumulocityPlatform(self)
    [docs] def createAppKey(self, url, username, password): """ Checks if the tenant has an external application defined for us and if not, creates it. - :param url: The URL to the Cumulocity tenant. + + :param url: The URL to the Cumulocity IoT tenant. :param username: The user to authenticate to the tenant. :param password: The password to authenticate to the tenant. - :return: A app key suitable for connecting a test correlator to the tenant. + :return: An app key suitable for connecting a test correlator to the tenant. """ try: conn = C8yConnection(url, username, password) @@ -103,12 +111,13 @@

    Source code for apamax.eplapplications.basetest

    <
    [docs] def createProject(self, name, existingProject=None): """ - Create a ProjectHelper object which mimics the Cumulocity EPL applications environment. + Creates a `ProjectHelper` object which mimics the Cumulocity IoT EPL applications environment. - Adds all the required bundles and adds the properties to connect and authenticate to the configured Cumulocity tenant. + Adds all the required bundles and adds the properties to connect and authenticate to the configured Cumulocity IoT tenant. - :param name: The name of the project + :param name: The name of the project. :param existingProject: If provided the path to an existing project. The environment will be added to that project instead of a new one. + :return: A `ProjectHelper` object. """ # only import apama.project when calling this function which requires it try: @@ -131,9 +140,10 @@

    Source code for apamax.eplapplications.basetest

    < return apama_project
    [docs] def addC8YPropertiesToProject(self, apamaProject, params=None): - """adds the connection parameters into the project + """Adds the connection parameters into a project. - :param params: dictionary to override and add to those defined for the project:: + :param apamaProject: The `ProjectHelper` object for a project. + :param params: The dictionary of parameters to override and add to those defined for the project:: <property name="CUMULOCITY_USERNAME" value="my-user"/> <property name="CUMULOCITY_PASSWORD" value="my-password"/> @@ -171,10 +181,11 @@

    Source code for apamax.eplapplications.basetest

    <
    [docs] def getTestSubjectEPLApps(self): """ - Retrieves a list of paths to EPL App(s) being tested. - If the user defines the <user-data name="EPLApp" value="EPLAppToBeTested"/> tag in the pysystest.xml file, - then we just return the EPL App defined by the tag's value.  - If this tag is not defined (or the value is an empty string) then all the mon files in project.EPL_APPS directory are returned. + Retrieves a list of paths to the EPL apps being tested. + + If the user defines the `<user-data name="EPLApp" value="EPLAppToBeTested"/>` tag in the pysystest.xml file, + then we just return the EPL app defined by the tag's value. If this tag is not defined (or the value is an empty string) + then all the mon files in the project.EPL_APPS directory are returned. """ # Check EPL_APPS env is valid if not os.path.isdir(self.project.EPL_APPS): @@ -192,7 +203,7 @@

    Source code for apamax.eplapplications.basetest

    < else: eplAppsPaths.append(os.path.join(self.project.EPL_APPS, eplApp)) else: - # If user has not defined EPLApp in pysystest.xml, return all files in project.EPL_APPS by default  + # If user has not defined EPLApp in pysystest.xml, return all files in project.EPL_APPS by default eplAppsFiles = os.listdir(self.project.EPL_APPS) for eplApp in eplAppsFiles: # Check file is a .mon file before appending @@ -210,6 +221,161 @@

    Source code for apamax.eplapplications.basetest

    < # Delete devices that were created by tests self._deleteTestDevices()
    +
    [docs] def createTestDevice(self, name, type='PySysTestDevice', children=None): + """ + Creates a Cumulocity IoT device for testing. + + :param str name: The name of the device. The name of the device is prefixed with `PYSYS_` so that the framework can identify and clean up test devices. + :param type: The type of the device. + :type type: str, optional + :param children: The list of device IDs to add them as children to the created device. + :type children: list[str], optional + :return: The ID of the device created. + :rtype: str + """ + device = { + 'name': f'{self.TEST_DEVICE_PREFIX}{name}', + 'c8y_IsDevice': True, + 'type': type, + 'com_cumulocity_model_Agent': {} + } + id = self.platform.getC8YConnection().do_request_json('POST', '/inventory/managedObjects', device) + + children = children or [] + for child in children: + self.platform.getC8YConnection().do_request_json('POST', f'/inventory/managedObjects/{id}/childDevices', {'managedObject': {'id': child}}) + return id
    + +
    [docs] def getAlarms(self, source=None, type=None, status=None, dateFrom=None, dateTo=None, **kwargs): + """ + Gets all alarms with matching parameters. + + For example:: + + self.getAlarms(type='my_alarms', dateFrom='2021-04-15 11:00:00.000Z', + dateTo='2021-04-15 11:30:00.000Z') + + :param source: The source object of the alarm. Get alarms for all objects if not specified. + :type source: str, optional + :param type: The type of alarm to get. Get alarms of all types if not specified. + :type type: str, optional + :param status: The status of the alarms to get. Get alarms of all status if not specified. + :type status: str, optional + :param dateFrom: The start time of the alarm in the ISO format. If specified, only alarms that are created on or after this time are fetched. + :type dateFrom: str, optional + :param dateTo: The end time of the alarm in the ISO format. If specified, only alarms that are created on or before this time are fetched. + :type dateTo: str, optional + :param \**kwargs: All additional keyword arguments are treated as extra parameters for filtering alarms. + :return: List of alarms. + :rtype: list[object] + """ + queryParams = {} + if source: + queryParams['source'] = source + if type: + queryParams['type'] = type + if status: + queryParams['status'] = status + if dateFrom: + queryParams['dateFrom'] = dateFrom + if dateTo: + queryParams['dateTo'] = dateTo + if kwargs: + queryParams.update(kwargs) + + return self._getCumulocityObjectCollection('/alarm/alarms', queryParams=queryParams, responseKey='alarms')
    + +
    [docs] def getOperations(self, deviceId=None, fragmentType=None, dateFrom=None, dateTo=None, **kwargs): + """ + Gets all operations with matching parameters. + + For example:: + + self.getOperations(fragmentType='my_ops', dateFrom='2021-04-15 11:00:00.000Z', + dateTo='2021-04-15 11:30:00.000Z') + + :param deviceId: The ID of the device associated with the operations. Get operations for all devices if not specified. + :type deviceId: str, optional + :param fragmentType: The type of fragment that must be part of the operation. + :type fragmentType: str, optional + :param dateFrom: The start time of the operation in the ISO format. If specified, only operations that are created on or after this time are fetched. + :type dateFrom: str, optional + :param dateTo: The end time of the operation in the ISO format. If specified, only operations that are created on or before this time are fetched. + :type dateTo: str, optional + :param \**kwargs: All additional keyword arguments are treated as extra parameters for filtering operations. + :return: List of operations. + :rtype: list[object] + """ + + queryParams = {} + if deviceId: + queryParams['deviceId'] = deviceId + if fragmentType: + queryParams['fragmentType'] = fragmentType + if dateFrom: + queryParams['dateFrom'] = dateFrom + if dateTo: + queryParams['dateTo'] = dateTo + if kwargs: + queryParams.update(kwargs) + + return self._getCumulocityObjectCollection('/devicecontrol/operations', queryParams=queryParams, responseKey='operations')
    + +
    [docs] def copyWithReplace(self, sourceFile, targetFile, replacementDict, marker='@'): + """ + Copies the source file to the target file and replaces the placeholder strings with the actual values. + + :param sourceFile: The path to the source file to copy. + :type sourceFile: str + :param targetFile: The path to the target file. + :type targetFile: str + :param replacementDict: A dictionary containing placeholder strings and their actual values to replace. + :type replacementDict: dict[str, str] + :param marker: Marker string used to surround replacement strings in the source file to disambiguate from normal strings. For example, `@`. + :type marker: str, optional + """ + def mapper(line): + for key, value in replacementDict.items(): + line = line.replace(f'{marker}{key}{marker}', str(value)) + return line + self.copy(sourceFile, targetFile, mappers=[mapper])
    + + def _getCumulocityObjectCollection(self, resourceUrl, queryParams, responseKey): + """ + Gets all Cumulocity IoT object collection. + + Fetches all pages of the collection. + + :param str resourceUrl: The base url of the object to get. For example, /alarm/alarms. + :param dict[str,str] queryParams: The query parameters. + :param str responseKey: The key to use to get actual object list from the response JSON. + :return: List of all object. + :rtype: list[object] + """ + result = [] + PAGE_SIZE = 100 # By default, pageSize = 5 for querying to C8y + queryParams = queryParams or {} + + def create_url(**params): + p = queryParams.copy() + p.update(params) + if '?' in resourceUrl: + return f'{resourceUrl}&{urllib.parse.urlencode(p)}' + else: + return f'{resourceUrl}?{urllib.parse.urlencode(p)}' + + resp = self.platform.getC8YConnection().do_get(create_url(pageSize=PAGE_SIZE, currentPage=1, withTotalPages=True)) + + result += resp[responseKey] + # Make sure we retrieve all pages from query + TOTAL_PAGES = resp['statistics']['totalPages'] + if TOTAL_PAGES > 1: + for currentPage in range(2, TOTAL_PAGES + 1): + resp = self.platform.getC8YConnection().do_get(create_url(pageSize=PAGE_SIZE, currentPage=currentPage)) + result += resp[responseKey] + + return result + def _clearActiveAlarms(self): """ Clears all active alarms as part of a pre-test tenant cleanup. @@ -222,28 +388,48 @@

    Source code for apamax.eplapplications.basetest

    < Deletes all ManagedObjects that have name prefixed with "PYSYS_" and the 'c8y_isDevice' param as part of pre-test tenant cleanup. """ self.log.info("Deleting old test devices") - # Retrieving test devices - PAGE_SIZE = 100 # By default, pageSize = 5 for querying to C8y - resp = self.platform.getC8YConnection().do_get( - "/inventory/managedObjects" + - f"?query=has(c8y_IsDevice)+and+name+eq+'{self.TEST_DEVICE_PREFIX}*'" + - f"&pageSize={PAGE_SIZE}&currentPage=1&withTotalPages=true") - testDevices = resp['managedObjects'] - # Make sure we retrieve all pages from query - TOTAL_PAGES = resp['statistics']['totalPages'] - if TOTAL_PAGES > 1: - for currentPage in range(2, TOTAL_PAGES + 1): - resp = self.platform.getC8YConnection().do_get( - "/inventory/managedObjects" + - f"?query=has(c8y_IsDevice)+and+name+eq+'{self.TEST_DEVICE_PREFIX}*'" + - f"&pageSize={PAGE_SIZE}&currentPage={currentPage}") - testDevices = testDevices + resp['managedObjects'] - + testDevices = self._getCumulocityObjectCollection(f"/inventory/managedObjects", + queryParams={'query':f"has(c8y_IsDevice) and name eq '{self.TEST_DEVICE_PREFIX}*'"}, + responseKey='managedObjects') # Deleting test devices testDeviceIds = [device['id'] for device in testDevices] for deviceId in testDeviceIds: - resp = self.platform.getC8YConnection().request('DELETE', f'/inventory/managedObjects/{deviceId}')
    + resp = self.platform.getC8YConnection().request('DELETE', f'/inventory/managedObjects/{deviceId}') + + def _deleteTestEPLApps(self): + """ + Deletes all EPL apps with name prefixed by "PYSYS_" or "PYSYS_TEST" + as part of a pre-test tenant cleanup. + """ + eplapps = EPLApps(self.platform.getC8YConnection()) + appsToDelete = [] + allApps = eplapps.getEPLApps(False) + for eplApp in allApps: + name = eplApp["name"] + if name.startswith(self.EPL_APP_PREFIX): + appsToDelete.append(name) + if len(appsToDelete) > 0: + self.log.info(f'Deleting the following EPL apps: {str(appsToDelete)}') + for name in appsToDelete: + eplapps.delete(name) +
    [docs] def getUTCTime(self, timestamp=None): + """ + Gets a Cumulocity IoT-compliant UTC timestamp string for the current time or the specified time. + + :param timestamp: The epoch timestamp to get the timestamp string for. Use the current time if not specified. + :type timestamp: float, optional + :return: Timestamp string. + :rtype: str + """ + if timestamp is not None: + t = datetime.utcfromtimestamp(timestamp) + else: + t = datetime.utcnow() + if t.microsecond == 0: + return t.isoformat() + '.000Z' + else: + return t.isoformat()[:-3] + 'Z'
    [docs]class LocalCorrelatorSimpleTest(ApamaC8YBaseTest): """ @@ -258,13 +444,11 @@

    Source code for apamax.eplapplications.basetest

    <
    [docs] def execute(self): """ Runs all the tests in the Input directory against the applications configured in the EPL_APPS - directory or with the EPLApps directive. + directory or with the `EPLApps` directive. """ - # Check APAMA_HOME and EPL_TESTING_SDK env are valid + # Check APAMA_HOME env is set if not os.path.isdir(self.project.APAMA_HOME): self.abort(BLOCKED, f'APAMA_HOME project property is not valid ({self.project.APAMA_HOME}). Try running in an Apama command prompt.') - if not os.path.isdir(self.project.EPL_TESTING_SDK): - self.abort(BLOCKED, f'EPL_TESTING_SDK is not valid ({self.project.EPL_TESTING_SDK}). Please set the EPL_TESTING_SDK environment variable.') from apama.correlator import CorrelatorHelper # Create test project and add C8Y properties and EPL Apps @@ -296,7 +480,7 @@

    Source code for apamax.eplapplications.basetest

    <
    [docs] def validate(self): """ - Checks than no errors were logged to the correlator log file. + Checks that no errors were logged to the correlator log file. """ # look for log statements in the correlator log file self.log.info("Checking for errors") @@ -304,7 +488,7 @@

    Source code for apamax.eplapplications.basetest

    <
    [docs] def addEPLAppsToProject(self, eplApps, project): """ - Adds the EPL app(s) being tested to a project.  + Adds the EPL app(s) being tested to a project. """ for eplApp in eplApps: try: @@ -316,7 +500,7 @@

    Source code for apamax.eplapplications.basetest

    <
    [docs] def getMonitorsFromInjectedFile(self, correlator, file): """ Retrieves a list of active monitors in a correlator, added from a particular mon file - using GET request to http://correlator.host:correlator.port + using a GET request to http://correlator.host:correlator.port. """ monitors = [] url = f'http://{correlator.host}:{correlator.port}' @@ -348,7 +532,7 @@

    Source code for apamax.eplapplications.basetest

    <
    [docs]class EPLAppsSimpleTest(ApamaC8YBaseTest): """ - Base test for running test with no run.py on EPL apps running in Cumulocity. + Base test for running test with no run.py on EPL apps running in Cumulocity IoT. """
    [docs] def setup(self): @@ -358,44 +542,24 @@

    Source code for apamax.eplapplications.basetest

    < self.apps = None self.eplapps = None self.addCleanupFunction(lambda: self.shutdown()) - self.EPL_APP_PREFIX = self.TEST_DEVICE_PREFIX self.EPL_TEST_APP_PREFIX = self.EPL_APP_PREFIX + "TEST_" - # Check EPL_TESTING_SDK env is set - if not os.path.isdir(self.project.EPL_TESTING_SDK): - self.abort(BLOCKED, f'EPL_TESTING_SDK is not valid ({self.project.EPL_TESTING_SDK}). Please set the EPL_TESTING_SDK environment variable.') + self.eplapps = EPLApps(self.platform.getC8YConnection()) self.prepareTenant()
    [docs] def prepareTenant(self): """ - Prepares the tenant for a test by deleting all devices created by previous tests, deleting all EPL Apps which have been uploaded by tests, and clearing all active alarms. + Prepares the tenant for a test by deleting all devices created by previous tests, deleting all EPL apps which have been uploaded by tests, and clearing all active alarms. - This is done first so that there's no possibility existing test apps raising alarms or creating devices + This is done first so that it is not possible for existing test apps to raise alarms or create devices. """ self._deleteTestEPLApps() super(EPLAppsSimpleTest, self).prepareTenant()
    - - def _deleteTestEPLApps(self): - """ - Deletes all EPL apps with name prefixed by "PYSYS_" or "PYSYS_TEST" - as part of a pre-test tenant cleanup. - """ - appsToDelete = [] - allApps = self.eplapps.getEPLApps(False) - for eplApp in allApps: - name = eplApp["name"] - if name.startswith(self.EPL_APP_PREFIX): - appsToDelete.append(name) - if len(appsToDelete) > 0: - self.log.info(f'Deleting the following EPL apps: {str(appsToDelete)}') - for name in appsToDelete: - self.eplapps.delete(name) -
    [docs] def execute(self): """ Runs all the tests in the Input directory against the applications configured in the EPL_APPS - directory or with the EPLApps directive using EPL applications to run each test. + directory or with the `EPLApps` directive using EPL apps to run each test. """ # EPL Applications under test appPaths = self.getTestSubjectEPLApps() @@ -426,14 +590,14 @@

    Source code for apamax.eplapplications.basetest

    <
    [docs] def validate(self): """ - Ensure that no errors were logged in the platform log file while we were running the test. + Ensures that no errors were logged in the platform log file while we were running the test. """ self.log.info("Checking for errors") self.assertGrep(self.platform.getApamaLogFile(), expr=' (ERROR|FATAL) .*', contains=False)
    [docs] def shutdown(self): """ - Deactivate all EPL apps which were uploaded when the test terminates. + Deactivates all uploaded EPL apps when the test terminates. """ self.log.info("Deactivating EPL apps") # when we finish, deactivate anything we started @@ -455,7 +619,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • diff --git a/doc/pydoc/_modules/apamax/eplapplications/connection.html b/doc/pydoc/_modules/apamax/eplapplications/connection.html index 98c9dac..2aac45f 100644 --- a/doc/pydoc/_modules/apamax/eplapplications/connection.html +++ b/doc/pydoc/_modules/apamax/eplapplications/connection.html @@ -4,7 +4,7 @@ - apamax.eplapplications.connection — EPL Apps Tools 10.9.0.1 documentation + apamax.eplapplications.connection — EPL Apps Tools 10.9.0.6 documentation @@ -26,7 +26,7 @@

    Navigation

  • modules |
  • - +
    @@ -53,7 +53,7 @@

    Source code for apamax.eplapplications.connection

    [docs]class C8yConnection(object): """ - Simple object to create connection to Cumulocity and perform REST requests. + Simple object to create connection to Cumulocity IoT and perform REST requests. """ def __init__(self, url, username, password): @@ -152,7 +152,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • diff --git a/doc/pydoc/_modules/apamax/eplapplications/eplapps.html b/doc/pydoc/_modules/apamax/eplapplications/eplapps.html index 7843593..aeaac38 100644 --- a/doc/pydoc/_modules/apamax/eplapplications/eplapps.html +++ b/doc/pydoc/_modules/apamax/eplapplications/eplapps.html @@ -4,7 +4,7 @@ - apamax.eplapplications.eplapps — EPL Apps Tools 10.9.0.1 documentation + apamax.eplapplications.eplapps — EPL Apps Tools 10.9.0.6 documentation @@ -26,7 +26,7 @@

    Navigation

  • modules |
  • - +
    @@ -56,7 +56,7 @@

    Source code for apamax.eplapplications.eplapps

    [docs]class EPLApps: - """Class for interacting with Apama EPL Apps in Cumulocity + """Class for interacting with Apama EPL Apps in Cumulocity IoT. :param connection: A C8yConnection object for the connection to the platform. """ @@ -66,11 +66,11 @@

    Source code for apamax.eplapplications.eplapps

    [docs] def deploy(self, file, name='', description=None, inactive=False, redeploy=False): """ - Deploys a local mon file to Apama EPL Apps in Cumulocity. + Deploys a local mon file to Apama EPL Apps in Cumulocity IoT. - :param file: Path to local mon file to be deployed as an EPL app + :param file: Path to local mon file to be deployed as an EPL app. :param name: Name of the EPL app to be uploaded (optional). By default this will be the name of the mon file being uploaded. - :param description: Description of the EPL app (optional) + :param description: Description of the EPL app (optional). :param inactive: Boolean of whether the app should be 'active' (inactive=False) or 'inactive' (inactive=True) when it is deployed. :param redeploy: Boolean of whether we are overwriting an existing EPL app. """ @@ -129,7 +129,7 @@

    Source code for apamax.eplapplications.eplapps

    [docs] def update(self, name, new_name=None, file=None, description=None, state=None): """ - Updates an EPL app in Cumulocity. + Updates an EPL app in Cumulocity IoT. :param name: name of the EPL App to be updated. :param new_name: the updated name of the EPL app (optional) @@ -199,7 +199,7 @@

    Source code for apamax.eplapplications.eplapps

    [docs] def getEPLApps(self, includeContents=False): """ :param includeContents: Fetches the EPL files with their contents if True. This is an optional query parameter. - :return: A json object of all the user's EPL apps in Cumulocity + :return: A json object of all the user's EPL apps in Cumulocity IoT. """ try: return self.connection.do_get(f'/service/cep/eplfiles?contents={includeContents}')['eplfiles'] @@ -208,7 +208,7 @@

    Source code for apamax.eplapplications.eplapps

    [docs] def delete(self, name: str): """ - Deletes an EPL app in Cumulocity. + Deletes an EPL app in Cumulocity IoT. :param name: The name of the EPL app to be deleted. """ @@ -247,7 +247,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • diff --git a/doc/pydoc/_modules/apamax/eplapplications/perf/basetest.html b/doc/pydoc/_modules/apamax/eplapplications/perf/basetest.html new file mode 100644 index 0000000..4dadd76 --- /dev/null +++ b/doc/pydoc/_modules/apamax/eplapplications/perf/basetest.html @@ -0,0 +1,930 @@ + + + + + + + apamax.eplapplications.perf.basetest — EPL Apps Tools 10.9.0.6 documentation + + + + + + + + + + + + + + +
    +
    +
    +
    + +

    Source code for apamax.eplapplications.perf.basetest

    +## License
    +# Copyright (c) 2020 Software AG, Darmstadt, Germany and/or its licensors
    +
    +# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this
    +# file except in compliance with the License. You may obtain a copy of the License at
    +# http://www.apache.org/licenses/LICENSE-2.0
    +# Unless required by applicable law or agreed to in writing, software distributed under the
    +# License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
    +# either express or implied.
    +# See the License for the specific language governing permissions and limitations under the License.
    +
    +from pysys.constants import *
    +import json
    +import urllib
    +import sys, os, time, pathlib, glob
    +import csv
    +import math, statistics
    +from pysys.utils.linecount import linecount
    +from apamax.eplapplications.basetest import ApamaC8YBaseTest
    +from apamax.eplapplications.eplapps import EPLApps
    +
    +# constants for performance metrics strings.
    +PERF_TIMESTAMP = 'timestamp'
    +PERF_TOTAL_MEMORY_USAGE = 'total_memory_usage'
    +PERF_MEMORY_CORR = 'memory_usage_corr'
    +PERF_MEMORY_APCTRL = 'memory_usage_apctrl'
    +PERF_CORR_IQ_SIZE = 'correlator_iq_size'
    +PERF_CORR_OQ_SIZE = 'correlator_oq_size'
    +PERF_CORR_SPAW_RATE = 'correlator_swap_read_write'
    +PERF_CORR_NUM_OUTPUT_SENT = 'correlator_num_output_sent'
    +PERF_CORR_NUM_INPUT_RECEIVED = 'correlator_num_input_received'
    +PERF_CEP_PROXY_REQ_STARTED = 'cep_proxy_requests_started'
    +PERF_CEP_PROXY_REQ_COMPLETED = 'cep_proxy_requests_completed'
    +PERF_CEP_PROXY_REQ_FAILED = 'cep_proxy_requests_failed'
    +PERF_CPU_USAGE_MILLI = 'cpu_usage_milli'
    +
    +# Description of metrics
    +METRICS_DESCRIPTION = {
    +	PERF_TOTAL_MEMORY_USAGE: 'Total Memory Usage (MB)',
    +	PERF_MEMORY_CORR: 'Correlator Memory Usage (MB)',
    +	PERF_MEMORY_APCTRL: 'Apama-ctrl Memory Usage (MB)',
    +	PERF_CORR_IQ_SIZE: 'Correlator Input Queue Size',
    +	PERF_CORR_OQ_SIZE: 'Correlator Output Queue Size',
    +	PERF_CORR_SPAW_RATE: 'Correlator Swapping Rate',
    +	PERF_CORR_NUM_INPUT_RECEIVED: 'Number of Inputs Received',
    +	PERF_CORR_NUM_OUTPUT_SENT: 'Number of Outputs Sent',
    +	PERF_CPU_USAGE_MILLI: 'CPU Usage (millicpu)',
    +	PERF_CEP_PROXY_REQ_STARTED: 'CEP Requests Started',
    +	PERF_CEP_PROXY_REQ_COMPLETED: 'CEP Requests Completed',
    +	PERF_CEP_PROXY_REQ_FAILED: 'CEP Requests Failed',
    +}
    +
    +# constants for output files
    +OUTFILE_PERF_RAW_DATA = 'perf_raw_data'
    +OUTFILE_PERF_CPU_USAGE = 'perf_cpuusage'
    +OUTFILE_PERF_STATS = 'perf_statistics'
    +OUTFILE_PERF_COUNTERS = 'perf_counters'
    +OUTFILE_ENV_DETAILS = 'env_details'
    +
    +
    [docs]class ObjectCreator: + """ + Base class for object creator implementation. + """ +
    [docs] def createObject(self, device, time): + """ + Creates an object to publish. + + :param str device: The ID of the device to create an object for. + :param str time: The source time to use for the object. + :return: A new object instance to publish. + + For example:: + + # Create and return a measurement object + return { + 'time': time, + "type": 'my_measurement', + "source": { "id": device }, + 'my_fragment': { + 'my_series': { + "value": random.uniform(0, 100) + } + } + } + """ + raise Exception('Not Implemented')
    + +
    [docs]class EPLAppsPerfTest(ApamaC8YBaseTest): + """ + Base class for EPL applications performance tests. + + Requires the following to be set on the project in the pysysproject.xml file (typically from the environment): + + - EPL_TESTING_SDK + + :ivar eplapps: The object for deploying and un-deploying EPL apps. + """ + +
    [docs] def setup(self): + super(EPLAppsPerfTest, self).setup() + self.addCleanupFunction(lambda: self._shutdown()) + self.eplapps = EPLApps(self.platform.getC8YConnection()) + + self.perfMonitorThread = None # Current performance monitoring thread + self.perfMonitorCount = 0 # Number of times performance monitoring has been started + self.simulators = [] # All simulators
    + +
    [docs] def prepareTenant(self, restartMicroservice=False): + """ + Prepares the tenant for a performance test by deleting all devices created by previous tests, deleting all EPL test applications, and clearing all active alarms. + + This must be called by the test before the application is deployed. + + :param bool restartMicroservice: Restart Apama-ctrl microservice. + """ + self.log.info('Preparing tenant to run performance test') + + # delete existing EPL test apps + self._deleteTestEPLApps() + + # Clear all active alarms + self._clearActiveAlarms() + + # Delete devices that were created by tests + self._deleteTestDevices() + + # stop monitoring thread + if self.perfMonitorThread: + self.perfMonitorThread.stop() + self.perfMonitorThread.join() + + # stop any running simulators + for s in self.simulators: + s.stop() + self.simulators = [] + + if restartMicroservice: + self._restartApamaMicroserviceImpl()
    + + def _shutdown(self): + """ + Performs common cleanup during test shutdown, like stopping performance monitoring thread, deactivating EPL test apps, and generating final HTML report. + """ + if self.perfMonitorThread: + self.perfMonitorThread.stop() + self.perfMonitorThread.join() + self._deactivateTestEPLApps() + self._generateFinalHTMLReport() + +
    [docs] def restartApamaMicroservice(self): + """ + Restarts Apama-ctrl microservice and waits for it to come back up. + """ + self._restartApamaMicroserviceImpl()
    + + def _restartApamaMicroserviceImpl(self): + """ Restarts Apama-ctrl microservice and wait for it to come back up. """ + + self.log.info('Restarting Apama-ctrl microservice') + count1 = linecount(self.platform.getApamaLogFile(), 'Microservice restart Microservice .* is being restarted') + count2 = linecount(self.platform.getApamaLogFile(), 'httpServer-instance.*Started receiving messages') + try: + self.platform.getC8YConnection().do_request_json('PUT', '/service/cep/restart', {}) + self.log.info('Restart requested') + except (urllib.error.HTTPError, urllib.error.URLError) as ex: + statuscode = int(ex.code) + if statuscode // 10 == 50: + self.log.info('Restart requested') + else: + raise Exception(f'Failed to restart Apama-ctrl: {ex}') + except Exception as ex: + raise Exception(f'Failed to restart Apama-ctrl: {ex}') + self.waitForGrep('platform.log', expr='Microservice restart Microservice .* is being restarted', condition=f'>={count1+1}') + self.waitForGrep('platform.log', expr='httpServer-instance.*Started receiving messages', condition=f'>={count2+1}', timeout=TIMEOUTS['WaitForProcess']) + self.log.info('Apama-ctrl microservice is successfully restarted') + + def _deactivateTestEPLApps(self): + """ + Deactivates all EPL test apps. + """ + eplapps = self.eplapps.getEPLApps(False) or [] + for app in eplapps: + name = app["name"] + if name.startswith(self.EPL_APP_PREFIX): + try: + self.eplapps.update(name, state='inactive') + except Exception as e: + self.log.info(f"Failed to deactivate app {name}: {e}") + + +
    [docs] def startMeasurementSimulator(self, devices, perDeviceRate, creatorFile, creatorClassName, creatorParams, duration=None, processingMode='CEP'): + """ + Starts a measurement simulator process to publish simulated measurements to Cumulocity IoT. + + The simulator uses an instance of the provided measurement creator class to create new measurements to send, + allowing the test to publish measurements of different types and sizes. + The simulator looks up the specified class in the specified Python file and creates a new instance of the class + using the provided parameters. The measurement creator class must extend the :class:`apamax.eplapplications.perf.basetest.ObjectCreator` class. + + :param list[str] devices: List of device IDs to generate measurements for. + :param float perDeviceRate: The rate of measurements to publish per device. + :param str creatorFile: The path to the Python file containing the measurement creator class. + :param str creatorClassName: The name of the measurement creator class that extends the :class:`apamax.eplapplications.perf.basetest.ObjectCreator` class. + :param list creatorParams: The list of parameters to pass to the constructor of the measurement creator class to create a new instance. + :param str processingMode: Cumulocity IoT processing mode. Possible values are CEP, PERSISTENT, TRANSIENT, and QUIESCENT. + :param duration: The duration (in seconds) to run the simulator for. If no duration is specified, then the simulator runs until either stopped or the end of the test. + :type duration: float, optional + :return: The process handle of the simulator process. + :rtype: pysys.process.Process + + For example:: + + # In a 'creator.py' file in the test input directory. + class MyMeasurementCreator(ObjectCreator): + def __init__(self, lowerBound, upperBound): + self.lowerBound = lowerBound + self.upperBound = upperBound + def createObject(self, device, time): + return { + 'time': time, + "type": 'my_measurement', + "source": { "id": device }, + 'my_fragment': { + 'my_series': { + "value": random.uniform(self.lowerBound, self.upperBound) + } + } + } + + ... + + # In the test + self.startMeasurementSimulator( + ['12345'], # device IDs + 1, # rate of measurements to publish + f'{self.input}/creator.py', # Python file path + 'MyMeasurementCreator', # class name + [10, 50], # constructor parameters for MyMeasurementCreator class + ) + """ + return self._startPublisher(devices, perDeviceRate, '/measurement/measurements', creatorFile, creatorClassName, creatorParams, duration, processingMode)
    + + def _startPublisher(self, devices, perDeviceRate, resourceUrl, creatorFile, creatorClassName, creatorParams, duration=None, processingMode='CEP'): + """ + Starts a publisher process to publish simulated data to Cumulocity IoT using provided object creator class. + + :param list[str] devices: List of device IDs. + :param float perDeviceRate: The rate of objects to publish per device. + :param str resourceUrl: The resource url, for example /measurement/measurements. + :param str creatorFile: The path to the Python file containing object creator class. + :param str creatorClassName: The name of the object creator class. + :param list creatorParams: The list of parameters to pass to constructor of the object creator class. + :param str processingMode: Cumulocity IoT processing mode. Possible values are CEP, PERSISTENT, TRANSIENT and QUIESCENT. + :param duration: The duration (in seconds) to run simulators for. If no duration specified then it runs until either stopped or end of the test. + :type duration: float, optional + :return: The publisher object which can be stopped by calling stop() method on it. + :rtype: pysys.process.Process + """ + object_creator_info = { + 'className': creatorClassName, + 'constructorParams': creatorParams, + 'file': creatorFile + } + env = self.getDefaultEnvirons(command=sys.executable) + test_framework_root=f'{self.project.EPL_TESTING_SDK}/testframework' + env['PYTHONPATH'] = f'{test_framework_root}{os.pathsep}{env.get("PYTHONPATH", "")}' + env['PYTHONDONTWRITEBYTECODE'] = 'true' + + (url, tanant_id, username, password) = self.platform.getC8yConnectionDetails() + script_path = str(pathlib.Path(__file__).parent.joinpath('publisher.py')) + arguments = [script_path, + '--base_url', url, + '--username', username, + '--password', password, + '--devices', json.dumps(devices), + '--per_device_rate', str(perDeviceRate), + '--resource_url', resourceUrl, + '--processing_mode', processingMode, + '--object_creator_info', json.dumps(object_creator_info), + ] + + if duration is not None: + arguments.extend(['--duration', str(duration)]) + + self.mkdir(f'{self.output}/simulators') + stdouterr=self.allocateUniqueStdOutErr('simulators/measurementpublisher') + p = self.startPython(arguments, stdouterr=stdouterr, disableCoverage=True, environs=env, background=True) + self.simulators.append(p) + self.waitForGrep(stdouterr[0], expr='Started publishing Cumulocity', errorExpr=['ERROR ', 'DataPublisher failed']) + return p + + def _perfMonitorSuffix(self, noSuffixForFirst=True): + """ + Gets suffix to add to generated files. + + :param noSuffixForFirst: Do not generate suffix if first perf monitoring thread is running, defaults to True + :type noSuffixForFirst: bool, optional + :return: The suffix string. + :rtype: str + """ + if self.perfMonitorCount > 1 or not noSuffixForFirst: + return '.' + str(self.perfMonitorCount - 1).rjust(2, '0') + else: + return '' + + def _getEnvironmentDetails(self): + """ + Gets environment details in which test is running. + + Used for HTML report. 
+ """ + paq_version = self.platform.getC8YConnection().do_get('/service/cep/diagnostics/componentVersion').get('componentVersion', '<unknown>') + apamaCtrlStatus = self.platform.getC8YConnection().do_get('/service/cep/diagnostics/apamaCtrlStatus') + microservice_name = apamaCtrlStatus.get('microservice_name', '<unknown>') + uptime = apamaCtrlStatus.get('uptime_secs', '<unknown>') + c8y_url = self.platform.getC8YConnection().base_url + c8y_version = self.platform.getC8YConnection().do_get('/tenant/system/options/system/version').get('value', '<unknown>') + pam_version = self.platform.getC8YConnection().do_get('/service/cep/diagnostics/info', headers={'Accept':'application/json'}).get('productVersion', '<unknow>') + app_manifest = self.platform.getC8YConnection().do_get(f'/application/applicationsByName/{microservice_name}') + microservice_resources = app_manifest.get('applications', [{}])[0]['manifest']['resources'] + cpu_limit = microservice_resources['cpu'] + if cpu_limit == '1': + cpu_limit += ' core' + else: + cpu_limit += ' cores' + + return { + 'Cumulocity IoT Tenant': c8y_url, + 'Cumulocity IoT Version': c8y_version, + 'Microservice name': microservice_name, + 'Microservice CPU Limit': cpu_limit, + 'Microservice Memory Limit': microservice_resources['memory'], + 'Apama Version': f'PAM {pam_version}, PAQ {paq_version}', + 'Uptime (secs)': uptime + } + +
    [docs] def startPerformanceMonitoring(self, pollingInterval=2): + """ + Starts a performance monitoring thread that periodically gathers and logs various metrics and publishes + performance statistics at the end. + + :param pollingInterval: The polling interval to get performance data. Defaults to 2 seconds. + :type pollingInterval: float, optional + :return: The background thread. + :rtype: L{pysys.utils.threadutils.BackgroundThread} + """ + if self.perfMonitorThread and self.perfMonitorThread.is_alive(): + self.perfMonitorThread.stop() + self.perfMonitorThread.join() + self.perfMonitorCount += 1 + if not os.path.exists(f'{self.output}/{OUTFILE_ENV_DETAILS}.json'): + self.write_text(f'{self.output}/{OUTFILE_ENV_DETAILS}.json', json.dumps(self._getEnvironmentDetails(), indent=2), encoding='utf8') + self.perfMonitorThread = self.startBackgroundThread("perf_monitoring_thread", self._monitorPerformance, {'pollingInterval':pollingInterval}) + return self.perfMonitorThread
    + + def _monitorPerformance(self, stopping, log, pollingInterval): + """ + Implements performance gathering thread. + + :param stopping: To check if thread should be stopped. + :param log: The logger. + :param pollingInterval: The polling interval to get performance data. + """ + log.info('Started gathering performance metrics') + cpu_monitoring_thread = self.startBackgroundThread("cpu_monitoring_thread", self._monitor_cpu_usage_impl) + suffix = self._perfMonitorSuffix() + def num(value): + try: + f = float(value) + return f if not math.isnan(f) else 0 + except Exception: + return 0 + try: + fieldnames = [PERF_TIMESTAMP, PERF_TOTAL_MEMORY_USAGE, PERF_MEMORY_CORR, PERF_MEMORY_APCTRL, + PERF_CORR_IQ_SIZE, PERF_CORR_OQ_SIZE, PERF_CORR_SPAW_RATE, + PERF_CORR_NUM_OUTPUT_SENT, PERF_CORR_NUM_INPUT_RECEIVED, PERF_CEP_PROXY_REQ_STARTED, + PERF_CEP_PROXY_REQ_COMPLETED, PERF_CEP_PROXY_REQ_FAILED + ] + csv_file = open(f'{self.output}/{OUTFILE_PERF_RAW_DATA}{suffix}.csv', 'w', encoding='utf8') + writer = csv.DictWriter(csv_file, fieldnames=fieldnames) + writer.writeheader() + while not stopping.is_set(): + data = {} + # gather performance data + # 1) get correlator status + corr_status = self.platform.getC8YConnection().do_get('/service/cep/diagnostics/correlator/status', headers={'Accept':'application/json'}) + + # 2) get apama-ctrl status + apctrl_status = self.platform.getC8YConnection().do_get('/service/cep/diagnostics/apamaCtrlStatus') + + # write data + data[PERF_TIMESTAMP] = time.time() + data[PERF_MEMORY_CORR] = num(corr_status.get('physicalMemoryMB', 0)) + data[PERF_MEMORY_APCTRL] = num(apctrl_status.get('apama_ctrl_physical_mb', 0)) + data[PERF_TOTAL_MEMORY_USAGE] = data[PERF_MEMORY_CORR] + data[PERF_MEMORY_APCTRL] + data[PERF_CORR_IQ_SIZE] = num(corr_status.get('numQueuedInput', 0)) # numInputQueuedInput + data[PERF_CORR_OQ_SIZE] = num(corr_status.get('numOutEventsQueued', 0)) + data[PERF_CORR_NUM_OUTPUT_SENT] = num(corr_status.get('numOutEventsSent', 0)) + data[PERF_CORR_NUM_INPUT_RECEIVED] = num(corr_status.get('numReceived', 0)) + data[PERF_CORR_SPAW_RATE] = num(corr_status.get('swapPagesRead', 0)) + num(corr_status.get('swapPagesWrite', 0)) + + cep_proxy_requests_started = 0 + cep_proxy_requests_completed = 0 + cep_proxy_requests_failed = 0 + cep_proxy_request_counts = apctrl_status.get('cep_proxy_request_counts', {}) + for key in cep_proxy_request_counts.keys(): + cep_proxy_requests_started += num(cep_proxy_request_counts[key].get('requestsStarted', 0)) + cep_proxy_requests_completed += num(cep_proxy_request_counts[key].get('requestsCompleted', 0)) + cep_proxy_requests_failed += num(cep_proxy_request_counts[key].get('requestsFailed', 0)) + data[PERF_CEP_PROXY_REQ_STARTED] = cep_proxy_requests_started + data[PERF_CEP_PROXY_REQ_COMPLETED] = cep_proxy_requests_completed + data[PERF_CEP_PROXY_REQ_FAILED] = cep_proxy_requests_failed + + writer.writerow(data) + csv_file.flush() + if not stopping.is_set(): + time.sleep(pollingInterval) + except Exception as ex: + log.error(f'Exception while gathering performance data: {ex}') + raise Exception(f'Exception while gathering performance data: {ex}').with_traceback(ex.__traceback__) + finally: + if csv_file: csv_file.close() + cpu_monitoring_thread.stop() + cpu_monitoring_thread.join() + self._generatePerfStatistics() + log.info('Finished performance monitoring') + + def _monitor_cpu_usage_impl(self, stopping, log): + """ + Implements CPU usage monitoring thread. + + :param stopping: To check if thread should be stopped. 
+ :param log: The logger. + """ + + url = '/service/cep/diagnostics/cpuUsageMillicores' + + # check if able to monitor CPU usage (REST url exposed + able to calculate CPU usage) + try: + cpu_usage = float(self.platform.getC8YConnection().do_get(url + '?sampleDurationMSec=10')) + except Exception as ex: + log.info("Unable to monitor CPU usage") + return + + log.info('Started gathering CPU usage') + url = url + '?sampleDurationMSec=2000' + suffix = self._perfMonitorSuffix() + with open(f'{self.output}/{OUTFILE_PERF_CPU_USAGE}{suffix}.csv', 'w', encoding='utf8') as csv_file: + writer = csv.DictWriter(csv_file, fieldnames=['timestamp', PERF_CPU_USAGE_MILLI]) + writer.writeheader() + while not stopping.is_set(): + try: + cpu_usage = self.platform.getC8YConnection().do_get(url) + writer.writerow({ + 'timestamp': time.time(), + PERF_CPU_USAGE_MILLI: cpu_usage + }) + csv_file.flush() + except Exception as e: + log.error("Unable to get cpu usage: " + str(e)) + + def _generatePerfStatistics(self): + """ + Generates performance statistics. + """ + suffix = self._perfMonitorSuffix() + # read data points + datapoints = {} + with open(f'{self.output}/{OUTFILE_PERF_RAW_DATA}{suffix}.csv', 'r', encoding='utf8') as csv_file: + csv_reader = csv.DictReader(csv_file) + for row in csv_reader: + for metric_name in [PERF_TOTAL_MEMORY_USAGE, PERF_MEMORY_CORR, PERF_MEMORY_APCTRL, + PERF_CORR_IQ_SIZE, PERF_CORR_OQ_SIZE, PERF_CORR_SPAW_RATE, + PERF_CORR_NUM_OUTPUT_SENT, PERF_CORR_NUM_INPUT_RECEIVED, PERF_CEP_PROXY_REQ_STARTED, + PERF_CEP_PROXY_REQ_COMPLETED, PERF_CEP_PROXY_REQ_FAILED]: + datapoints.setdefault(metric_name, []) + datapoints[metric_name].append(float(row[metric_name])) + + cpu_perf_file = f'{self.output}/{OUTFILE_PERF_CPU_USAGE}{suffix}.csv' + if os.path.exists(cpu_perf_file): + metric_name = PERF_CPU_USAGE_MILLI + datapoints.setdefault(metric_name, []) + with open(cpu_perf_file, 'r', encoding='utf8') as csv_file: + csv_reader = csv.DictReader(csv_file) + for row in csv_reader: + datapoints[metric_name].append(float(row[metric_name])) + + # Extract counter like values to a separate object. We capture difference between first and last value only for these. 
+ counter_values = {} + for name in [PERF_CORR_NUM_INPUT_RECEIVED, PERF_CORR_NUM_OUTPUT_SENT, PERF_CEP_PROXY_REQ_STARTED, + PERF_CEP_PROXY_REQ_COMPLETED, PERF_CEP_PROXY_REQ_FAILED]: + values = datapoints[name] + counter_values[name] = int(values[-1]) - int(values[0]) + del datapoints[name] + + self.write_text(f'{OUTFILE_PERF_COUNTERS}{suffix}.json', json.dumps(counter_values, indent=2), encoding='utf8') + self.write_text(f'{OUTFILE_PERF_RAW_DATA}{suffix}.json', json.dumps(datapoints, indent=2), encoding='utf8') + + def percentile(data, percent): # calculate percentile + size = len(data) + return sorted(data)[int(math.ceil((size * percent) / 100)) - 1] + + # calculate statistics + stats = {} + for name in datapoints.keys(): + values = datapoints[name] + stats[name] = {} + stats[name]['min'] = min(values) + stats[name]['max'] = max(values) + stats[name]['mean'] = statistics.mean(values) + stats[name]['median'] = statistics.median(values) + stats[name]['75th_percentile'] = percentile(values, 75) + stats[name]['90th_percentile'] = percentile(values, 90) + stats[name]['95th_percentile'] = percentile(values, 95) + stats[name]['99th_percentile'] = percentile(values, 99) + + self.write_text(f'{OUTFILE_PERF_STATS}{suffix}.json', json.dumps(stats, indent=2), encoding='utf8') + + with open(f'{self.output}/{OUTFILE_PERF_STATS}{suffix}.csv', 'w', encoding='utf8') as csv_file: + columns = ['name'] + list(stats[PERF_TOTAL_MEMORY_USAGE].keys()) + writer = csv.DictWriter(csv_file, fieldnames=columns) + writer.writeheader() + + for name in stats.keys(): + row = {'name':name} + for col in columns[1:]: + row[col] = stats[name][col] + writer.writerow(row) + +
    [docs] def read_json(self, fileName, fileDirectory=None): + """ + Reads a JSON file and returns its content. + + :param fileName: Name of the file. + :type fileName: str + :param fileDirectory: Directory of the file. Use test output directory if not specified. + :type fileDirectory: str, optional + :return: The decoded content of the file. + """ + fileDirectory = fileDirectory or self.output + return json.loads(pathlib.Path(fileDirectory).joinpath(fileName).read_text(encoding='utf8'))
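For instance, a test could use this helper to load one of the JSON files that the framework writes to the test output directory; a small sketch (the file name and metric keys are illustrative):

    stats = self.read_json('my_perf_statistics.json')
    self.log.info('Median memory usage: %s MB', stats.get('total_memory_usage', {}).get('median'))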
+ + def _confirmStableQueueSize(self, queue_name, perf_data, noise_floor=200, ratio_threshold=0.5, discard_fraction=0.2): + """ + Checks that the queue sizes are stable or decreasing. Logs a warning if queue sizes are still increasing. + + :param queue_name: The name of the correlator queue. + :param perf_data: The raw performance data. + :param noise_floor: Skip the analysis if the difference between the queue sizes at the end and at the start is less than this. + :param ratio_threshold: Log an error if the ratio of the slope of the second half of the graph to the slope of the first half is more than this. + :param discard_fraction: The fraction of data to discard at the beginning and the end before performing the analysis. + """ + values = perf_data['correlator_iq_size' if queue_name == 'input' else 'correlator_oq_size'] + discard_size = int(len(values)*discard_fraction) + values = values[discard_size:-discard_size] + if len(values) <= 5: + self.log.warn('Not enough datapoints to analyse queue growth') + return + + # Check that queue size graph flattens off, or at least starts to flatten off + # calculate the segment averages + left = values[0 : int(len(values) * 0.35)] + left = float(sum(left)) / len(left) + mid = values[int(len(values) * 0.35) : int(len(values) * 0.65)] + mid = float(sum(mid)) / len(mid) + right = values[int(len(values) * 0.65) : int(len(values) * 0.95)] + right = float(sum(right)) / len(right) + + # Easy case - it's going downwards for a large part of the graph, or the change is beneath the noise floor + if right - left <= noise_floor or right - mid <= noise_floor: + return + if mid - left == 0 and right - mid == 0: + return + try: + slope_ratio = (right - mid) / (mid - left) + except ZeroDivisionError: + slope_ratio = float("inf") + + # if the slope is not coming down fast enough then the queue is most probably going to fill up eventually + mean_size_towards_end = statistics.mean(values[int(len(values) * 0.85) : int(len(values) * 0.95)]) + if slope_ratio > ratio_threshold: + self.log.error(f'Correlator {queue_name} queue was increasing continuously. It probably would have been full eventually. Mean queue size towards the end was {mean_size_towards_end}') + elif slope_ratio > 0.2: + self.log.warn(f'Correlator {queue_name} queue was increasing slowly. It probably would have been full eventually. Mean queue size towards the end was {mean_size_towards_end}') + # the test is not failed because this might be a false positive + + def _to_html_list(self, values): + """ + Generates an HTML list to be embedded in an HTML page from the provided values. + """ + if values is None: return '' + result = '' + if isinstance(values, dict): + for key, value in values.items(): + result += f'<li><span class="key">{key}: </span>{value}</li>' + elif isinstance(values, list): + for value in values: + result += f'<li>{value}</li>' + else: + result += f'<li>{str(values)}</li>' + return f'<ul>{result}</ul>' + + def _dict_to_html_table(self, values, column_names): + """ + Generates an HTML table to be embedded in an HTML page from the provided dictionary. + :param values: Dictionary of values. + :param column_names: Names of the columns to use. 
+ """ + result = '<tr><td>&nbsp;</td>' + for key in column_names: + result += f'<th scope="col">{key}</th>' + result += f'</tr>' + + def format_value(val): + if isinstance(val, float): return f'{val:0.2f}' + else: return str(val) + + for metric_name, row in values.items(): + result += f'<tr><th scope="row">{metric_name}</th>' + if isinstance(row, dict): + for c in column_names: + val = row.get(c, '') + result += f'<td>{format_value(val)}</td>' + else: + result += f'<td>{format_value(row)}</td>' + result += '</tr>' + + return f'<table>{result}</table>' + +
    [docs] def generateHTMLReport(self, description, testConfigurationDetails=None, extraPerformanceMetrics=None): + """ + Generates an HTML report of the performance result. The report is generated at the end of the test. + + When testing for multiple variations, multiple HTML reports are combined into a single HTML report. + + :param description: A brief description of the test. + :type description: str + :param testConfigurationDetails: Details of the test configuration to include in the report. + :type testConfigurationDetails: dict, list, str, optional + :param extraPerformanceMetrics: Extra application-specific performance metrics to include in the report. + :type extraPerformanceMetrics: dict, list, str, optional + """ + if self.perfMonitorThread.is_alive(): + self.perfMonitorThread.stop() + self.perfMonitorThread.join() + + suffix = self._perfMonitorSuffix() + + if not os.path.exists(f'{self.output}/{OUTFILE_PERF_STATS}{suffix}.json'): + self._generatePerfStatistics() + + # method to generate textual representation of a time range + def format_time_range(timestamp1, timestamp2): + import datetime + datetime1 = datetime.datetime.fromtimestamp(timestamp1, datetime.timezone.utc) + datetime2 = datetime.datetime.fromtimestamp(timestamp2, datetime.timezone.utc) + + format1 = datetime1.strftime('%a %Y-%m-%d %H:%M:%S') + ' UTC' + if datetime1.date()==datetime2.date(): + format2 = datetime2.strftime('%H:%M:%S') + else: + format2 = datetime2.strftime('%a %Y-%m-%d %H:%M:%S') + ' UTC' + + delta = datetime2 - datetime1 + delta = delta-datetime.timedelta(microseconds=delta.microseconds) + return f'{format1} to {format2} (={delta})' + + ### Now generate codes, html fragments to fill in the template to generate final report + + ## Generate HTML for table of standard performance statistics + # Read statistics in a single dict + table_data = self.read_json(f'{OUTFILE_PERF_STATS}{suffix}.json') + # add counter type data to the table as well + counters = self.read_json(f'{OUTFILE_PERF_COUNTERS}{suffix}.json') + for s in [PERF_CORR_NUM_INPUT_RECEIVED, PERF_CORR_NUM_OUTPUT_SENT]: + table_data[s] = counters[s] + column_names = list(table_data[PERF_TOTAL_MEMORY_USAGE].keys()) + + # prepare dictionary for html using more descriptive names for metrics + for key in list(table_data.keys()): + table_data[METRICS_DESCRIPTION.get(key, key)] = table_data[key] + del table_data[key] + standard_perf_stats_table = self._dict_to_html_table(table_data, column_names) + + ## Generate HTML list of test configuration + test_config_list = self._to_html_list(testConfigurationDetails) + + ## Generate HTML for any additional performance metrics + additional_perf_statistics = self._to_html_list(extraPerformanceMetrics) + if extraPerformanceMetrics: + additional_perf_statistics = f'<h4>App Specific Performance Statistics</h4>{additional_perf_statistics}' + + ## Generate HTML for graphs + # generate data for memory and queue_size graphs + queue_data = [] + memory_data = [] + cpu_usage_data = [] + start_time = -1 + end_time = -1 + with open(f'{self.output}/{OUTFILE_PERF_RAW_DATA}{suffix}.csv', 'r', encoding='utf8') as csv_file: + csv_reader = csv.DictReader(csv_file) + for row in csv_reader: + timestamp = float(row[PERF_TIMESTAMP]) + if start_time < 0: + start_time = timestamp + end_time = timestamp + timestamp_milli = 1000.0 * timestamp + iq = int(float(row[PERF_CORR_IQ_SIZE])) + oq = int(float(row[PERF_CORR_OQ_SIZE])) + # generate JavaScript code to create an array of data for a timestamp - [new Date(...), y1_value, y2_value, 
...] + queue_data.append(f'[new Date({timestamp_milli}),{iq}, {oq}]') + memory = float(row[PERF_TOTAL_MEMORY_USAGE]) + memory_data.append(f'[new Date({timestamp_milli}), {memory}, {float(row[PERF_MEMORY_APCTRL])}, {float(row[PERF_MEMORY_CORR])}]') + queue_time_range = memory_time_range = format_time_range(start_time, end_time) + + # generate data for CPU usage graphs; default the time range in case CPU usage data was not captured + cpu_usage_time_range = '' + if os.path.exists(f'{self.output}/{OUTFILE_PERF_CPU_USAGE}{suffix}.csv'): + start_time = -1 + end_time = -1 + with open(f'{self.output}/{OUTFILE_PERF_CPU_USAGE}{suffix}.csv', 'r', encoding='utf8') as csv_file: + csv_reader = csv.DictReader(csv_file) + for row in csv_reader: + timestamp = float(row[PERF_TIMESTAMP]) + if start_time < 0: + start_time = timestamp + end_time = timestamp + timestamp_milli = 1000.0 * timestamp + cpu_usage = float(row[PERF_CPU_USAGE_MILLI]) + cpu_usage_data.append(f'[new Date({timestamp_milli}), {cpu_usage}]') + cpu_usage_time_range = format_time_range(start_time, end_time) + + ### Generate the replacement values used to fill in the template for the final report.html file + variation_replacements = { + 'TEST_TITLE': self.descriptor.title, + 'VARIATION_DESCRIPTION': description, + 'VARIATION_TITLE': description, + 'VARIATION_LINKS': '', + 'VARIATION_ID': '0', + 'TEST_CONFIGURATION': test_config_list, + 'STATS_TABLE': standard_perf_stats_table, + 'ADDITIONAL_PERF_STATISTICS': additional_perf_statistics, + 'CORRELATOR_QUEUES_DATA': ','.join(queue_data), + 'CORRELATOR_QUEUES_TIMERANGE': queue_time_range, + 'MEMORY_DATA': ','.join(memory_data), + 'MEMORY_TIMERANGE': memory_time_range, + 'CPU_USAGE_DATA': ','.join(cpu_usage_data), + 'CPU_USAGE_TIMERANGE': cpu_usage_time_range, + } + + self.write_text(f'html_report_data{self._perfMonitorSuffix(False)}.json', json.dumps(variation_replacements, indent=2), encoding='utf8')
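A sketch of a typical call at the end of a test run; the description, configuration details and extra metrics below are illustrative values chosen by the test author:

    self.generateHTMLReport('100 devices at 10 measurements/second per device',
        testConfigurationDetails={'Number of devices': 100, 'Measurements per second per device': 10},
        extraPerformanceMetrics={'Alarms raised': 42})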
    + + def _generateFinalHTMLReport(self): + """ + Generates a final HTML report. + + Combines multiple HTML reports into one. + """ + template_dir = f'{self.project.EPL_TESTING_SDK}/testframework/resources' + variation_template = pathlib.Path(f'{template_dir}/template_perf_details.html').read_text(encoding='utf8') + + files = glob.glob(f'{self.output}/html_report_data*.json') + files = sorted(files) + + variation_htmls = [] + variation_links = [] + env_details = self.read_json(f'{OUTFILE_ENV_DETAILS}.json') + for i,f in enumerate(files): + f = f.strip() + filename = os.path.basename(f) + replacements = self.read_json(filename) + variation_desc = replacements['VARIATION_DESCRIPTION'] + replacements['VARIATION_ID'] = f'variation_{i}' + replacements['VARIATION_TITLE'] = variation_desc + + variation_html = variation_template + for (key, value) in replacements.items(): + variation_html = variation_html.replace(f'@{key}@', value) + if i > 0: + variation_html = '<br/><a class="backtotoplink" href="#top_of_the_page">Back to top</a><br/><hr/>' + variation_html + variation_htmls.append(variation_html) + variation_links.append(f'<a href="#test_variation_variation_{i}">{variation_desc}</a>') + + if len(files) > 1: + if 'Uptime (secs)' in env_details: + # don't need to log uptime in combined report + del env_details['Uptime (secs)'] + + variation_links_html = '' if len(variation_links) <= 1 else f'<h2>Variation List</h2>{self._to_html_list(variation_links)}' + + replacements = { + 'TEST_TITLE': self.descriptor.title, + 'ENVIRONMENT_DETAILS': self._to_html_list(env_details), + 'VARIATION_DATA': '\n\n'.join(variation_htmls), + 'VARIATION_LINKS': variation_links_html, + } + + self.copyWithReplace(f'{template_dir}/template_perf_report.html', f'{self.output}/report.html', replacements, marker='@') + self.log.info(f'Generated performance report {os.path.abspath(os.path.join(self.output, f"report.html"))}') + +
    [docs] def validate(self): + """ + Performs standard validations of the test run. + + The following validations are performed. + + - No errors in the microservice log. + - Microservice did not terminate because of high memory usage. + - Microservice's memory usage remained below 90% of available memory. + - Correlator was not swapping memory. + + The test should define its own `validate` method for performing any application-specific validation. Ensure that + the test calls the super implementation of the `validate` method, using `super(PySysTest, self).validate()`. + """ + logFile = self.platform.getApamaLogFile() + + self.assertGrep(logFile, expr=' (ERROR|FATAL) .*', contains=False) + + # Check that microservice did not use more than 90% of available memory + self.assertGrep(logFile, expr='apama_highmemoryusage.*Apama is using 90. of available memory', contains=False) + + # Check that microservice did not exit because of high memory usage + self.assertGrep(logFile, expr='(Java exit 137|exit code 137)', contains=False) + + # Check that no request to /cep from cumulocity failed + for file in glob.glob(f'{self.output}/{OUTFILE_PERF_COUNTERS}*.json'): + perf_counters = self.read_json(file) + self.assertThat('num_failed_cep_requests == 0', num_failed_cep_requests=perf_counters[PERF_CEP_PROXY_REQ_FAILED]) + + # Check that correlator was not swapping + for file in glob.glob(f'{self.output}/{OUTFILE_PERF_STATS}*.json'): + perf_stats = self.read_json(file) + self.assertThat('correlator_swap_read_write == 0', correlator_swap_read_write=perf_stats[PERF_CORR_SPAW_RATE]['min']) + + # check that mean and median size of input and output queue are reasonable + filename = os.path.basename(file).replace(OUTFILE_PERF_STATS, OUTFILE_PERF_RAW_DATA) + raw_perf_data = self.read_json(f'{self.output}/{filename}') + for (queue, max_size) in [('input', 20_000), ('output', 10_000)]: + stats = perf_stats[PERF_CORR_IQ_SIZE if queue == 'input' else PERF_CORR_OQ_SIZE] + self.assertThat('median_queue_size < max_queue_size * 0.8', median_queue_size=stats['median'], max_queue_size=max_size) + self.assertThat('mean_queue_size < max_queue_size * 0.8', mean_queue_size=stats['mean'], max_queue_size=max_size) + self._confirmStableQueueSize(queue, raw_perf_data)
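As the docstring above notes, a test that adds its own checks should still invoke the base implementation; a minimal sketch (the application-specific assertion is illustrative):

    from apamax.eplapplications.perf.basetest import EPLAppsPerfTest

    class PySysTest(EPLAppsPerfTest):
        def validate(self):
            super(PySysTest, self).validate()   # standard log, memory and queue checks
            # application-specific checks go here, for example:
            # self.assertThat('alarms_raised > 0', alarms_raised=self.alarmsRaised)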
    + +
    + +
    +
    +
    + +
    +
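For reference, a complete performance test using the helpers documented on this page might look like the following minimal sketch; the app name, file paths and run duration are illustrative:

    from apamax.eplapplications.perf.basetest import EPLAppsPerfTest

    class PySysTest(EPLAppsPerfTest):
        def execute(self):
            self.prepareTenant()                              # clean tenant state first
            deviceId = self.createTestDevice('PerfDevice')    # framework adds the PYSYS_ prefix; use the ID as the measurement source
            self.eplapps.deploy(self.input + '/MyApp.mon', name='PYSYS_MyApp', redeploy=True)
            self.startPerformanceMonitoring()
            self.wait(60)
            self.generateHTMLReport('Single device baseline')

        def validate(self):
            super(PySysTest, self).validate()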
    + + + + \ No newline at end of file diff --git a/doc/pydoc/_modules/apamax/eplapplications/platform.html b/doc/pydoc/_modules/apamax/eplapplications/platform.html index f7da0be..916861a 100644 --- a/doc/pydoc/_modules/apamax/eplapplications/platform.html +++ b/doc/pydoc/_modules/apamax/eplapplications/platform.html @@ -4,7 +4,7 @@ - apamax.eplapplications.platform — EPL Apps Tools 10.9.0.1 documentation + apamax.eplapplications.platform — EPL Apps Tools 10.9.0.6 documentation @@ -26,7 +26,7 @@

    Navigation

  • modules |
  • - +
    @@ -55,7 +55,7 @@

    Source code for apamax.eplapplications.platform

    <
    [docs]class CumulocityPlatform(object): """ - Class to create a connection to the Cumulocity platform configured in pysysproject.xml + Class to create a connection to the Cumulocity IoT platform configured in pysysproject.xml and spool the logs from the platform locally. Requires the following properties to be set in pysysproject.xml: @@ -140,24 +140,24 @@

    Source code for apamax.eplapplications.platform

    < resp = self._c8yConn.do_get("/application/applications/%s/logs/%s?dateFrom=%s&dateTo=2099-01-01T00:00:00%%2B01:00" % (self._applicationId, self._instanceName, startdate), jsonResp=False) logLatest = resp.decode('utf8').split("\n") - with open(os.path.join(self.parent.output, 'platform.log'), 'a', encoding='utf8') as log: + with open(os.path.join(self.parent.output, 'platform.log'), 'a', encoding='utf8') as logfile: for line in logLatest: if line not in logLineDeduplication: - log.write(line + "\n") + logfile.write(line + "\n") logLineDeduplication = logLineDeduplication.union(logLatest) except Exception as e: log.error("Exception while spooling logs:" + str(e))
    [docs] def shutdown(self): - """ Stop spooling the log files when the test finishes """ + """ Stop spooling the log files when the test finishes. """ self.__spoolLogs = False
    [docs] def getC8YConnection(self): - """ Return the C8yConnection object for this platform """ + """ Return the C8yConnection object for this platform. """ return self._c8yConn
    [docs] def getApamaLogFile(self): - """ Return the path to the Apama log file within Cumulocity """ + """ Return the path to the Apama log file within Cumulocity IoT.""" return os.path.join(self.parent.output, 'platform.log')
    @@ -171,7 +171,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • diff --git a/doc/pydoc/_modules/index.html b/doc/pydoc/_modules/index.html index 673c6ef..a043072 100644 --- a/doc/pydoc/_modules/index.html +++ b/doc/pydoc/_modules/index.html @@ -4,7 +4,7 @@ - Overview: module code — EPL Apps Tools 10.9.0.1 documentation + Overview: module code — EPL Apps Tools 10.9.0.6 documentation @@ -26,7 +26,7 @@

    Navigation

  • modules |
  • - +
    @@ -39,6 +39,7 @@

    All modules for which code is available

    @@ -52,7 +53,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • @@ -53,19 +53,21 @@

    ApamaC8YBaseTest class apamax.eplapplications.basetest.ApamaC8YBaseTest(descriptor, outsubdir, runner)[source]

    Bases: pysys.basetest.BaseTest

    -

    Base test for EPL Applications tests.

    -

    Requires the following to be set on the project in pysysproject.xml file (typically from the environment):

    +

    Base test for EPL applications tests.

    +

    Requires the following to be set on the project in the pysysproject.xml file (typically from the environment):

    • EPL_TESTING_SDK

    • -
    • APAMA_HOME - only if running a local correlator

    • +
    • APAMA_HOME - Only if running a local correlator.

    addC8YPropertiesToProject(apamaProject, params=None)[source]
    -

    adds the connection parameters into the project

    +

    Adds the connection parameters into a project.

    Parameters
    -

    params

    dictionary to override and add to those defined for the project:

    +
      +
    • apamaProject – The ProjectHelper object for a project.

    • +
    • params

      The dictionary of parameters to override and add to those defined for the project:

      <property name="CUMULOCITY_USERNAME" value="my-user"/>
       <property name="CUMULOCITY_PASSWORD" value="my-password"/>
       <property name="CUMULOCITY_SERVER_URL" value="https://my-url/"/>
      @@ -74,7 +76,24 @@ 

      ApamaC8YBaseTest<property name="CUMULOCITY_MEASUREMENT_FORMAT" value=""/>

      -

      +

    • +
    +
    +
    +
    + +
    +
    +copyWithReplace(sourceFile, targetFile, replacementDict, marker='@')[source]
    +

    Copies the source file to the target file and replaces the placeholder strings with the actual values.

    +
    +
    Parameters
    +
      +
    • sourceFile (str) – The path to the source file to copy.

    • +
    • targetFile (str) – The path to the target file.

    • +
    • replacementDict (dict[str, str]) – A dictionary containing placeholder strings and their actual values to replace.

    • +
    • marker (str, optional) – Marker string used to surround replacement strings in the source file to disambiguate from normal strings. For example, @.

    • +
    @@ -82,13 +101,17 @@

    ApamaC8YBaseTest
    createAppKey(url, username, password)[source]
    -

    Checks if the tenant has an external application defined for us and if not, creates it. -:param url: The URL to the Cumulocity tenant. -:param username: The user to authenticate to the tenant. -:param password: The password to authenticate to the tenant.

    +

    Checks if the tenant has an external application defined for us and if not, creates it.

    -
    Returns
    -

    A app key suitable for connecting a test correlator to the tenant.

    +
    Parameters
    +
      +
    • url – The URL to the Cumulocity IoT tenant.

    • +
    • username – The user to authenticate to the tenant.

    • +
    • password – The password to authenticate to the tenant.

    • +
    +
    +
    Returns
    +

    An app key suitable for connecting a test correlator to the tenant.

    @@ -96,25 +119,123 @@

    ApamaC8YBaseTest
    createProject(name, existingProject=None)[source]
    -

    Create a ProjectHelper object which mimics the Cumulocity EPL applications environment.

    -

    Adds all the required bundles and adds the properties to connect and authenticate to the configured Cumulocity tenant.

    +

    Creates a ProjectHelper object which mimics the Cumulocity IoT EPL applications environment.

    +

    Adds all the required bundles and adds the properties to connect and authenticate to the configured Cumulocity IoT tenant.

    Parameters
      -
    • name – The name of the project

    • +
    • name – The name of the project.

    • existingProject – If provided the path to an existing project. The environment will be added to that project instead of a new one.

    +
    Returns
    +

    A ProjectHelper object.

    +
    +
    +
    + +
    +
    +createTestDevice(name, type='PySysTestDevice', children=None)[source]
    +

    Creates a Cumulocity IoT device for testing.

    +
    +
    Parameters
    +
      +
• name (str) – The name of the device. The name is prefixed with PYSYS_ so that the framework can identify and clean up test devices.

    • +
    • type (str, optional) – The type of the device.

    • +
    • children (list[str], optional) – The list of device IDs to add them as children to the created device.

    • +
    +
    +
    Returns
    +

    The ID of the device created.

    +
    +
    Return type
    +

    str
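For example (a minimal sketch; the device name is illustrative, and the framework prepends PYSYS_ to it so that prepareTenant can later clean the device up):

    deviceId = self.createTestDevice('TemperatureSensor01')
    self.log.info('Created test device %s', deviceId)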

    +
    +
    +
    + +
    +
    +getAlarms(source=None, type=None, status=None, dateFrom=None, dateTo=None, **kwargs)[source]
    +

    Gets all alarms with matching parameters.

    +

    For example:

    +
    self.getAlarms(type='my_alarms', dateFrom='2021-04-15 11:00:00.000Z', 
    +                                dateTo='2021-04-15 11:30:00.000Z')
    +
    +
    +
    +
    Parameters
    +
      +
    • source (str, optional) – The source object of the alarm. Get alarms for all objects if not specified.

    • +
    • type (str, optional) – The type of alarm to get. Get alarms of all types if not specified.

    • +
• status (str, optional) – The status of the alarms to get. Get alarms with any status if not specified.

    • +
    • dateFrom (str, optional) – The start time of the alarm in the ISO format. If specified, only alarms that are created on or after this time are fetched.

    • +
    • dateTo (str, optional) – The end time of the alarm in the ISO format. If specified, only alarms that are created on or before this time are fetched.

    • +
    • **kwargs – All additional keyword arguments are treated as extra parameters for filtering alarms.

    • +
    +
    +
    Returns
    +

    List of alarms.

    +
    +
    Return type
    +

    list[object]

    +
    +
    +
    + +
    +
    +getOperations(deviceId=None, fragmentType=None, dateFrom=None, dateTo=None, **kwargs)[source]
    +

    Gets all operations with matching parameters.

    +

    For example:

    +
    self.getOperations(fragmentType='my_ops', dateFrom='2021-04-15 11:00:00.000Z', 
    +                                        dateTo='2021-04-15 11:30:00.000Z')
    +
    +
    +
    +
    Parameters
    +
      +
• deviceId (str, optional) – The ID of the device to get operations for. Get operations for all devices if not specified.

    • +
    • fragmentType (str, optional) – The type of fragment that must be part of the operation.

    • +
    • dateFrom (str, optional) – The start time of the operation in the ISO format. If specified, only operations that are created on or after this time are fetched.

    • +
    • dateTo (str, optional) – The end time of the operation in the ISO format. If specified, only operations that are created on or before this time are fetched.

    • +
    • **kwargs – All additional keyword arguments are treated as extra parameters for filtering operations.

    • +
    +
    +
    Returns
    +

    List of operations.

    +
    +
    Return type
    +

    list[object]

    +
    getTestSubjectEPLApps()[source]
    -

    Retrieves a list of paths to EPL App(s) being tested. -If the user defines the <user-data name=”EPLApp” value=”EPLAppToBeTested”/> tag in the pysystest.xml file, -then we just return the EPL App defined by the tag’s value.  -If this tag is not defined (or the value is an empty string) then all the mon files in project.EPL_APPS directory are returned.

    +

    Retrieves a list of paths to the EPL apps being tested.

    +

    If the user defines the <user-data name="EPLApp" value="EPLAppToBeTested"/> tag in the pysystest.xml file, +then we just return the EPL app defined by the tag’s value. If this tag is not defined (or the value is an empty string) +then all the mon files in the project.EPL_APPS directory are returned.

    +
    + +
    +
    +getUTCTime(timestamp=None)[source]
    +

    Gets a Cumulocity IoT-compliant UTC timestamp string for the current time or the specified time.

    +
    +
    Parameters
    +

timestamp (float, optional) – The epoch timestamp to get the timestamp string for. The current time is used if not specified.

    +
    +
    Returns
    +

    Timestamp string.

    +
    +
    Return type
    +

    str

    +
    +
    @@ -150,21 +271,21 @@

    LocalCorrelatorSimpleTest
    addEPLAppsToProject(eplApps, project)[source]
    -

    Adds the EPL app(s) being tested to a project.

    +

    Adds the EPL app(s) being tested to a project.

    execute()[source]

    Runs all the tests in the Input directory against the applications configured in the EPL_APPS -directory or with the EPLApps directive.

    +directory or with the EPLApps directive.

    getMonitorsFromInjectedFile(correlator, file)[source]

    Retrieves a list of active monitors in a correlator, added from a particular mon file -using GET request to http://correlator.host:correlator.port

    +using a GET request to http://correlator.host:correlator.port.

    @@ -184,7 +305,7 @@

    LocalCorrelatorSimpleTest
    validate()[source]
    -

    Checks than no errors were logged to the correlator log file.

    +

    Checks that no errors were logged to the correlator log file.

    @@ -196,19 +317,19 @@

    EPLAppsSimpleTest class apamax.eplapplications.basetest.EPLAppsSimpleTest(descriptor, outsubdir, runner)[source]

    Bases: apamax.eplapplications.basetest.ApamaC8YBaseTest

    -

    Base test for running test with no run.py on EPL apps running in Cumulocity.

    +

    Base test for running test with no run.py on EPL apps running in Cumulocity IoT.

    execute()[source]

    Runs all the tests in the Input directory against the applications configured in the EPL_APPS -directory or with the EPLApps directive using EPL applications to run each test.

    +directory or with the EPLApps directive using EPL apps to run each test.

    prepareTenant()[source]
    -

    Prepares the tenant for a test by deleting all devices created by previous tests, deleting all EPL Apps which have been uploaded by tests, and clearing all active alarms.

    -

    This is done first so that there’s no possibility existing test apps raising alarms or creating devices

    +

    Prepares the tenant for a test by deleting all devices created by previous tests, deleting all EPL apps which have been uploaded by tests, and clearing all active alarms.

    +

    This is done first so that it is not possible for existing test apps to raise alarms or create devices.

    @@ -228,13 +349,13 @@

    EPLAppsSimpleTest
    shutdown()[source]
    -

    Deactivate all EPL apps which were uploaded when the test terminates.

    +

    Deactivates all uploaded EPL apps when the test terminates.

    validate()[source]
    -

    Ensure that no errors were logged in the platform log file while we were running the test.

    +

    Ensures that no errors were logged in the platform log file while we were running the test.

    @@ -253,7 +374,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • diff --git a/doc/pydoc/autodocgen/apamax.eplapplications.buildVersions.html b/doc/pydoc/autodocgen/apamax.eplapplications.buildVersions.html index de9a788..6e74572 100644 --- a/doc/pydoc/autodocgen/apamax.eplapplications.buildVersions.html +++ b/doc/pydoc/autodocgen/apamax.eplapplications.buildVersions.html @@ -4,7 +4,7 @@ - apamax.eplapplications.buildVersions — EPL Apps Tools 10.9.0.1 documentation + apamax.eplapplications.buildVersions — EPL Apps Tools 10.9.0.6 documentation @@ -34,8 +34,8 @@

    Navigation

  • previous |
  • - - + +
    @@ -60,12 +60,14 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • diff --git a/doc/pydoc/autodocgen/apamax.eplapplications.connection.html b/doc/pydoc/autodocgen/apamax.eplapplications.connection.html index 4d20139..0698e38 100644 --- a/doc/pydoc/autodocgen/apamax.eplapplications.connection.html +++ b/doc/pydoc/autodocgen/apamax.eplapplications.connection.html @@ -4,7 +4,7 @@ - apamax.eplapplications.connection — EPL Apps Tools 10.9.0.1 documentation + apamax.eplapplications.connection — EPL Apps Tools 10.9.0.6 documentation @@ -34,8 +34,8 @@

    Navigation

  • previous |
  • - - + +
    @@ -53,7 +53,7 @@

    C8yConnection class apamax.eplapplications.connection.C8yConnection(url, username, password)[source]

    Bases: object

    -

    Simple object to create connection to Cumulocity and perform REST requests.

    +

    Simple object to create connection to Cumulocity IoT and perform REST requests.

    do_get(path, params=None, headers=None, jsonResp=True)[source]
    @@ -129,7 +129,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • diff --git a/doc/pydoc/autodocgen/apamax.eplapplications.eplapps.html b/doc/pydoc/autodocgen/apamax.eplapplications.eplapps.html index b7ed56b..d9b455d 100644 --- a/doc/pydoc/autodocgen/apamax.eplapplications.eplapps.html +++ b/doc/pydoc/autodocgen/apamax.eplapplications.eplapps.html @@ -4,7 +4,7 @@ - apamax.eplapplications.eplapps — EPL Apps Tools 10.9.0.1 documentation + apamax.eplapplications.eplapps — EPL Apps Tools 10.9.0.6 documentation @@ -16,7 +16,7 @@ - + @@ -53,7 +53,7 @@

    EPLApps class apamax.eplapplications.eplapps.EPLApps(connection)[source]

    Bases: object

    -

    Class for interacting with Apama EPL Apps in Cumulocity

    +

    Class for interacting with Apama EPL Apps in Cumulocity IoT.

    Parameters

    connection – A C8yConnection object for the connection to the platform.

    @@ -62,7 +62,7 @@

    EPLApps
    delete(name: str)[source]
    -

    Deletes an EPL app in Cumulocity.

    +

    Deletes an EPL app in Cumulocity IoT.

    Parameters

    name – The name of the EPL app to be deleted.

    @@ -73,13 +73,13 @@

    EPLApps
    deploy(file, name='', description=None, inactive=False, redeploy=False)[source]
    -

    Deploys a local mon file to Apama EPL Apps in Cumulocity.

    +

    Deploys a local mon file to Apama EPL Apps in Cumulocity IoT.

    Parameters
      -
    • file – Path to local mon file to be deployed as an EPL app

    • +
    • file – Path to local mon file to be deployed as an EPL app.

    • name – Name of the EPL app to be uploaded (optional). By default this will be the name of the mon file being uploaded.

    • -
    • description – Description of the EPL app (optional)

    • +
    • description – Description of the EPL app (optional).

    • inactive – Boolean of whether the app should be ‘active’ (inactive=False) or ‘inactive’ (inactive=True) when it is deployed.

    • redeploy – Boolean of whether we are overwriting an existing EPL app.

    @@ -112,7 +112,7 @@

    EPLApps

    includeContents – Fetches the EPL files with their contents if True. This is an optional query parameter.

    Returns
    -

    A json object of all the user’s EPL apps in Cumulocity

    +

    A json object of all the user’s EPL apps in Cumulocity IoT.

    @@ -120,7 +120,7 @@

    EPLApps
    update(name, new_name=None, file=None, description=None, state=None)[source]
    -

    Updates an EPL app in Cumulocity.

    +

    Updates an EPL app in Cumulocity IoT.

    Parameters

    diff --git a/doc/pydoc/autodocgen/apamax.eplapplications.html b/doc/pydoc/autodocgen/apamax.eplapplications.html index 974815e..8990c8f 100644 --- a/doc/pydoc/autodocgen/apamax.eplapplications.html +++ b/doc/pydoc/autodocgen/apamax.eplapplications.html @@ -4,7 +4,7 @@ - apamax.eplapplications — EPL Apps Tools 10.9.0.1 documentation + apamax.eplapplications — EPL Apps Tools 10.9.0.6 documentation @@ -17,7 +17,7 @@ - + @@ -64,7 +64,10 @@

    Navigation

    apamax.eplapplications.eplapps

    -

    apamax.eplapplications.platform

    +

    apamax.eplapplications.perf

    +

    + +

    apamax.eplapplications.platform

    @@ -82,12 +85,14 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • diff --git a/doc/pydoc/autodocgen/apamax.html b/doc/pydoc/autodocgen/apamax.html index 61b26d3..ff2bf98 100644 --- a/doc/pydoc/autodocgen/apamax.html +++ b/doc/pydoc/autodocgen/apamax.html @@ -4,7 +4,7 @@ - PySys helpers for Apama EPL Apps — EPL Apps Tools 10.9.0.1 documentation + PySys helpers for EPL apps — EPL Apps Tools 10.9.0.6 documentation @@ -17,7 +17,7 @@ - + @@ -44,7 +44,7 @@

    Navigation

    -

    PySys helpers for Apama EPL Apps

    +

    PySys helpers for EPL apps

    @@ -69,7 +69,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • modules |
  • - + @@ -47,6 +47,7 @@

    Index

    | G | L | M + | O | P | R | S @@ -84,8 +85,6 @@

    A

  • module
  • - - + @@ -261,6 +323,10 @@

    S

  • (apamax.eplapplications.platform.CumulocityPlatform method)
  • +
  • startMeasurementSimulator() (apamax.eplapplications.perf.basetest.EPLAppsPerfTest method) +
  • +
  • startPerformanceMonitoring() (apamax.eplapplications.perf.basetest.EPLAppsPerfTest method) +
    • apamax.eplapplications.buildVersions @@ -93,6 +92,8 @@

      A

    • module
    +
    @@ -279,6 +345,8 @@

    V

    @@ -296,7 +364,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • @@ -80,7 +80,15 @@

    ContentsAdvanced tests -
  • PySys helpers for Apama EPL Apps
  • next |
  • - +

    @@ -80,6 +80,21 @@

    Python Module Index

        apamax.eplapplications.eplapps + + +     + apamax.eplapplications.perf + + + +     + apamax.eplapplications.perf.basetest + + + +     + apamax.eplapplications.perf.publisher +     @@ -98,7 +113,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • @@ -72,7 +72,8 @@

    Table of Contents

  • Using the eplapp.py command line tool
  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • @@ -88,7 +89,7 @@

    Navigation

  • modules |
  • - +
    @@ -60,9 +60,10 @@

    IntroductionPerformance testing EPL apps document for writing performance tests.

    -

    Creating device simulators

    +

    Creating device simulators

    All measurements and alarms in Cumulocity IoT must be associated with a source. Devices in Cumulocity IoT are represented by managed objects, each of which has a unique identifier. When sending measurement or alarm events, the source field of these events must be set to a identifier of a managed object in Cumulocity IoT. Therefore, in order to send measurements from our test EPL app, it must create a ManagedObject device simulator to be the source of these measurements.

    If you are using the PySys framework to run tests in the cloud, any devices created by your tests should be named with prefix “PYSYS_”, and have the c8y_IsDevice property. These indicators are what the framework uses to identify which devices should be deleted following a test. Note that deleting a device in Cumulocity IoT will also delete any alarms or measurements associated with that device so the cleanup from a test is done when another test is next run.

    To see how this can be done, have a look at the createNewDevice action below:

    @@ -100,7 +101,7 @@

    Creating device simulators}

    -

    This action initializes a ManagedObject (using the “PYSYS_” naming prefix and adding the c8y_IsDevice property), before sending it using a withReponse action. It then confirms that it has been successfully created using listeners for ObjectCommitted and ObjectCommitFailed events. Whenever you are creating or updating an object in Cumulocity IoT and you want to verify that the change has been successful, it is recommended that you use the withResponse action in conjunction with ObjectCommitted and ObjectCommitFailed listeners (for more information, see the information on updating a managed object in the ‘The Cumulocity IoT Transport Connectivity Plug-in’ section of the documentation). Using this approach you can easily relay when the process has completed (which is done by sending an event, DeviceCreated, in the example above), and in the event of an error you can cause the test to exit quickly.

    +

    This action initializes a ManagedObject (using the “PYSYS_” naming prefix and adding the c8y_IsDevice property), before sending it using a withResponse action. It then confirms that it has been successfully created using listeners for ObjectCommitted and ObjectCommitFailed events. Whenever you are creating or updating an object in Cumulocity IoT and you want to verify that the change has been successful, it is recommended that you use the withResponse action in conjunction with ObjectCommitted and ObjectCommitFailed listeners (for more information, see the information on updating a managed object in the ‘The Cumulocity IoT Transport Connectivity Plug-in’ section of the documentation). Using this approach you can easily relay when the process has completed (which is done by sending an event, DeviceCreated, in the example above), and in the event of an error you can cause the test to exit quickly.

    Sending events to your EPL apps

    @@ -178,7 +179,7 @@

    Querying Cumulocity IoT// Send measurement and check to see whether an alarm is raised integer measurementReqId := sendMeasurement(device.deviceId, value); - on ObjectCommited(reqId=measurementReqId) + on ObjectCommitted(reqId=measurementReqId) and not ObjectCommitFailed(reqId=measurementReqId) { send FindAlarm(reqId, {"source": device.deviceId, "type": ALARM_TYPE, "resolved": "false"}) to FindAlarm.SEND_CHANNEL; @@ -254,7 +255,8 @@

    Table of Contents

  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps
  • @@ -148,7 +148,8 @@

    Table of Contents

  • Writing tests for EPL apps
  • Using PySys to test your EPL apps
  • -
  • PySys helpers for Apama EPL Apps
  • +
  • Testing the performance of your EPL apps
  • +
  • PySys helpers for EPL apps