Updates a metric alert rule. See Metric Alert Rule Types under Create a Metric Alert Rule for an Organization for valid request body configurations for the different metric alert rule types.
Warning: Calling this endpoint fully overwrites the specified metric alert.
A metric alert rule is a configuration that defines the conditions for triggering an alert. It specifies the metric type, function, time interval, and threshold values that determine when an alert should be triggered. Metric alert rules are used to monitor and notify you when certain metrics, like error count, latency, or failure rate, cross a predefined threshold. These rules help you proactively identify and address issues in your project.
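Because an update fully overwrites the stored rule, a common pattern is to fetch the current configuration, change only the fields you need, and send the complete object back. The sketch below is a minimal example, assuming the standard Sentry endpoint path (`/api/0/organizations/{organization_id_or_slug}/alert-rules/{alert_rule_id}/`), the Python `requests` library, and an auth token with the appropriate write scope; the organization, rule ID, and field values are placeholders.

```python
import requests

# Placeholders: substitute your own organization, rule ID, and token.
ORG = "my-org"
RULE_ID = "1234"
TOKEN = "YOUR_AUTH_TOKEN"

URL = f"https://sentry.io/api/0/organizations/{ORG}/alert-rules/{RULE_ID}/"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Fetch the existing rule so that no fields are dropped by the update.
current = requests.get(URL, headers=HEADERS)
current.raise_for_status()
body = current.json()

# 2. Modify only what should change; the PUT below still sends the whole
#    object, because this endpoint fully overwrites the stored rule.
body["name"] = "API error count too high"
body["timeWindow"] = 60

# 3. Write the complete rule back.
resp = requests.put(URL, headers=HEADERS, json=body)
resp.raise_for_status()
print("Updated rule:", resp.json().get("id"))
```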
Request body, which must comply with the following JSON Schema:
{
  "required" : [ "aggregate", "name", "projects", "query", "thresholdType", "timeWindow", "triggers" ],
  "type" : "object",
  "properties" : {
    "name" : {
      "maxLength" : 256,
      "type" : "string",
      "description" : "The name for the rule."
    },
    "aggregate" : {
      "type" : "string",
      "description" : "A string representing the aggregate function used in this alert rule. Valid aggregate functions are `count`, `count_unique`, `percentage`, `avg`, `apdex`, `failure_rate`, `p50`, `p75`, `p95`, `p99`, `p100`, and `percentile`. See **Metric Alert Rule Types** under [Create a Metric Alert Rule](/api/alerts/create-a-metric-alert-rule-for-an-organization/#metric-alert-rule-types) for valid configurations."
    },
    "timeWindow" : {
      "type" : "integer",
      "description" : "The time period to aggregate over.\n\n* `1` - 1 minute\n* `5` - 5 minutes\n* `10` - 10 minutes\n* `15` - 15 minutes\n* `30` - 30 minutes\n* `60` - 1 hour\n* `120` - 2 hours\n* `240` - 4 hours\n* `1440` - 24 hours",
      "enum" : [ 1, 5, 10, 15, 30, 60, 120, 240, 1440 ]
    },
    "projects" : {
      "type" : "array",
      "description" : "The names of the projects to filter by.",
      "items" : { "type" : "string" }
    },
    "query" : {
      "type" : "string",
      "description" : "An event search query to subscribe to and monitor for alerts. For example, to filter transactions so that only those with status code 400 are included, you could use `\"query\": \"http.status_code:400\"`. Use an empty string for no filter."
    },
    "thresholdType" : {
      "type" : "integer",
      "description" : "The comparison operator for the critical and warning thresholds. The comparison operator for the resolved threshold is automatically set to the opposite operator. When a percentage change threshold is used, `0` is equivalent to \"Higher than\" and `1` is equivalent to \"Lower than\".\n\n* `0` - Above\n* `1` - Below",
      "enum" : [ 0, 1 ]
    },
    "triggers" : {
      "type" : "array",
      "description" : "\nA list of triggers, where each trigger is an object with the following fields:\n- `label`: One of `critical` or `warning`. A `critical` trigger is always required.\n- `alertThreshold`: The value that the subscription needs to reach to trigger the alert rule.\n- `actions`: A list of actions that take place when the threshold is met. Set as an empty list if no actions are to take place.\n```json\ntriggers: [\n  {\n    \"label\": \"critical\",\n    \"alertThreshold\": 100,\n    \"actions\": [\n      {\n        \"type\": \"email\",\n        \"targetType\": \"user\",\n        \"targetIdentifier\": \"23489853\",\n        \"inputChannelId\": null,\n        \"integrationId\": null,\n        \"sentryAppId\": null\n      }\n    ]\n  },\n  {\n    \"label\": \"warning\",\n    \"alertThreshold\": 75,\n    \"actions\": []\n  }\n]\n```\nMetric alert rule trigger actions follow the following structure:\n- `type`: The type of trigger action. Valid values are `email`, `slack`, `msteams`, `pagerduty`, `sentry_app`, `sentry_notification`, and `opsgenie`.\n- `targetType`: The type of target the notification will be sent to. Valid values are `specific`, `user`, `team`, and `sentry_app`.\n- `targetIdentifier`: The ID of the target. This must be an integer for PagerDuty and Sentry apps, and a string for all others. Examples of appropriate values include a Slack channel name (`#my-channel`), a user ID, a team ID, a Sentry app ID, etc.\n- `inputChannelId`: The ID of the Slack channel. This is only used for the Slack action, and can be used as an alternative to providing the `targetIdentifier`.\n- `integrationId`: The integration ID. This is required for every action type except `email` and `sentry_app`.\n- `sentryAppId`: The ID of the Sentry app. This is required when `type` is `sentry_app`.\n- `priority`: The severity of the PagerDuty alert or the priority of the Opsgenie alert (optional). Defaults for PagerDuty are `critical` for critical and `warning` for warning. Defaults for Opsgenie are `P1` for critical and `P2` for warning.\n",
      "items" : { }
    },
    "environment" : {
      "type" : "string",
      "description" : "The name of the environment to filter by. Defaults to all environments.",
      "nullable" : true
    },
    "dataset" : {
      "type" : "string",
      "description" : "The name of the dataset that this query will be executed on. Valid values are `events`, `transactions`, `metrics`, `sessions`, and `generic-metrics`. Defaults to `events`. See **Metric Alert Rule Types** under [Create a Metric Alert Rule](/api/alerts/create-a-metric-alert-rule-for-an-organization/#metric-alert-rule-types) for valid configurations."
    },
    "queryType" : {
      "type" : "integer",
      "description" : "The type of query. If no value is provided, `queryType` is set to the default for the specified `dataset`. See **Metric Alert Rule Types** under [Create a Metric Alert Rule](/api/alerts/create-a-metric-alert-rule-for-an-organization/#metric-alert-rule-types) for valid configurations.\n\n* `0` - event.type:error\n* `1` - event.type:transaction\n* `2` - None",
      "enum" : [ 0, 1, 2 ]
    },
    "eventTypes" : {
      "type" : "array",
      "description" : "List of event types that this alert will be related to. Valid values are `default` (events captured using [Capture Message](/product/sentry-basics/integrate-backend/capturing-errors/#capture-message)), `error` and `transaction`.",
      "items" : { "type" : "string" }
    },
    "comparisonDelta" : {
      "type" : "integer",
      "description" : "An optional int representing the time delta to use as the comparison period, in minutes. Required when using a percentage change threshold (\"x%\" higher or lower compared to `comparisonDelta` minutes ago). A percentage change threshold cannot be used for [Crash Free Session Rate](/api/alerts/create-a-metric-alert-rule-for-an-organization/#crash-free-session-rate) or [Crash Free User Rate](/api/alerts/create-a-metric-alert-rule-for-an-organization/#crash-free-user-rate)."
    },
    "resolveThreshold" : {
      "type" : "number",
      "description" : "Optional value that the metric needs to reach to resolve the alert. If no value is provided, this is set automatically based on the lowest severity trigger's `alertThreshold`. For example, if the alert is set to trigger at the warning level when the number of errors is above 50, then the alert would be set to resolve when there are fewer than 50 errors. If `thresholdType` is `0`, `resolveThreshold` must be greater than the critical threshold. Otherwise, it must be less than the critical threshold.",
      "format" : "double"
    },
    "owner" : {
      "type" : "string",
      "description" : "The ID of the team or user that owns the rule.",
      "nullable" : true
    },
    "monitorType" : {
      "minimum" : 0,
      "type" : "integer",
      "description" : "Monitor type represents whether the alert rule is actively being monitored or is monitored given a specific activation condition."
    },
    "activationCondition" : {
      "minimum" : 0,
      "type" : "integer",
      "description" : "Optional int that represents a trigger condition for when to start monitoring.",
      "nullable" : true
    }
  }
}
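For orientation, here is one way a schema-compliant request body could look, written as a Python dictionary (it serializes to JSON when sent with `json=` as in the earlier sketch). The project name, thresholds, and target identifier are invented for illustration, and the `p95(transaction.duration)` aggregate string is an assumption about how the aggregate is spelled for a transaction-duration alert.

```python
# Illustrative request body for a p95 transaction-duration alert.
# All names, IDs, and thresholds are made-up example values.
alert_rule_body = {
    "name": "p95 latency too high",
    "aggregate": "p95(transaction.duration)",  # assumed aggregate spelling
    "timeWindow": 60,                          # aggregate over 1 hour
    "projects": ["my-backend"],
    "query": "",                               # empty string = no extra filter
    "thresholdType": 0,                        # 0 = Above
    "dataset": "transactions",
    "eventTypes": ["transaction"],
    "triggers": [
        {
            "label": "critical",               # a critical trigger is always required
            "alertThreshold": 1000,
            "actions": [
                {
                    "type": "email",
                    "targetType": "user",
                    "targetIdentifier": "23489853",
                    "inputChannelId": None,    # becomes null in the JSON payload
                    "integrationId": None,     # not needed for the email action
                    "sentryAppId": None,
                }
            ],
        },
        {
            "label": "warning",
            "alertThreshold": 750,
            "actions": [],                     # no actions for the warning level
        },
    ],
}
```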
Specify how the response should be mapped to the table output. The following formats are available:
Structured Table: Returns a parsed table with data split into rows and columns.
Raw Response: Returns the raw response in a single row with the following columns:
To use this node in KNIME, install the Sentry Nodes extension from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.
Deploy, schedule, execute, and monitor your KNIME workflows locally, in the cloud or on-premises – with our brand new NodePit Runner.
Try NodePit Runner!
Do you have feedback, questions, or comments about NodePit, want to support this platform, or want your own nodes or workflows listed here as well? Do you think the search results could be improved or something is missing? Then please get in touch! Alternatively, you can send us an email at mail@nodepit.com.
Please note that this is only about NodePit. We do not provide general support for KNIME — please use the KNIME forums instead.