Create Mapping Enrichment Table Using CSV

Uploads an enrichment map

🚧

Authentication

All BigPanda APIs require Bearer Token Authorization in the call headers.

This API uses the User API Key type of Authorization token.

Use this API to create a new table for a mapping enrichment or to completely replace the existing table. Send the entire table as comma-separated values (CSV) in the body of the request or in a CSV file.

🚧

Tag Limitations

To maintain quality of service, BigPanda limits the number of alert tags and enrichment items available. Each organization can have:

  • 1000 alert tags
  • 500 enrichment items per alert tag
  • 20,000 alert enrichment items total
  • 200 mapping enrichment results tags

If more alert tags or enrichment items are needed, we recommend exploring normalization options to help streamline your alert data and improve incident quality.

🚧

Naming Limitations

Some words are already used for tagging and backend functions in BigPanda. These words may have limited functionality within BigPanda when used as tag names.

When creating new alert or incident tags, we recommend using an alternate name (e.g. "short_description") for the tag to bring that data into the BigPanda system.

To see the full list of naming limitations, refer to Tag Naming.

Sample Calls

curl --request POST \
     --url https://api.bigpanda.io/resources/v2.1/mapping-enrichment/{mapping_enrichment_id}/map \
     --header 'Authorization: Bearer <User API Key>' \
     --header 'Content-Type: text/csv; charset=utf8' \
     --data-binary @synthetic_monitor_playbook.csv

curl --request POST \
     --url https://eu-api.bigpanda.io/resources/v2.1/mapping-enrichment/{mapping_enrichment_id}/map \
     --header 'Authorization: Bearer <User API Key>' \
     --header 'Content-Type: text/csv; charset=utf8' \
     --data-binary @synthetic_monitor_playbook.csv

Upload Options

The mapping table can be copied directly into the body of the call or uploaded as a separate CSV file. To send the table inline, use --data in the call parameters; to upload a separate CSV file, use --data-binary instead, as shown in the sample calls.

❗️

Table as CSV in Request Body

The payload must end with an empty new line. Make sure there are no spaces before the closing quotation marks on the last line.
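A minimal sketch of preparing an inline body, using placeholder tag names and values. printf makes the required trailing newline at the end of the payload explicit:

```shell
# Placeholder table; the final \n gives the payload its required empty new line.
printf 'host,owner\napp-server-1,team-a\napp-server-2,team-b\n' > body.csv
# tail -c 1 prints nothing when the last byte is a newline.
[ -z "$(tail -c 1 body.csv)" ] && echo "body ends with a newline"
```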

❗️

Table in Separate CSV File

The CSV file must use standard line feed characters for line endings and must end with an empty new line. Some programs use different line ending formats. If you receive the following error, you may need to convert the line endings or add an empty new line to your file.

Stream finished but there was a truncated final frame in the buffer
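One way to repair such a file before uploading is sketched below. The demo file stands in for a CSV exported with Windows (CRLF) line endings and no final newline; the file names are placeholders:

```shell
# Demo input with CRLF endings and a missing final newline.
printf 'host,owner\r\napp-server-1,team-a' > enrichment_map.csv
# Strip carriage returns so only standard line feeds remain.
tr -d '\r' < enrichment_map.csv > enrichment_map_lf.csv
# Append a final newline only if the last byte is not already one.
if [ -n "$(tail -c 1 enrichment_map_lf.csv)" ]; then echo >> enrichment_map_lf.csv; fi
```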

Table Requirements

The data table must meet these requirements:

  • The structure matches the mapping schema definition. For example, the column names must match the titles in the schema definition. Similarly, the table must include all of the columns in the definition
  • The table contains at least two rows—the title row and at least one row of enrichment values
  • Each row is unique; the table must not contain duplicate rows
  • The field values do not exceed 32K in length
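The requirements above can be checked locally before uploading. This is a rough pre-flight sketch against a demo file with placeholder tag names:

```shell
# Demo table; replace with your real file.
printf 'host,owner\napp-server-1,team-a\napp-server-2,team-b\n' > map.csv
# A title row plus at least one row of enrichment values.
[ "$(wc -l < map.csv)" -ge 2 ] && echo "row count ok"
# No duplicate rows.
[ -z "$(sort map.csv | uniq -d)" ] && echo "no duplicate rows"
# No field value longer than 32K characters.
longest=$(awk -F',' '{ for (i = 1; i <= NF; i++) if (length($i) > m) m = length($i) } END { print m }' map.csv)
[ "$longest" -le 32768 ] && echo "field lengths ok"
```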

❗️

No Line Breaks

New line characters (\n or \r) and other line breaks are not supported within field values.

🚧

CSV size

The CSV file cannot be larger than 512 MB.
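A simple guard against the size limit before uploading (the file name is a placeholder):

```shell
# Demo file standing in for the CSV to upload.
printf 'host,owner\napp-server-1,team-a\n' > upload.csv
limit=$((512 * 1024 * 1024))
[ "$(wc -c < upload.csv)" -le "$limit" ] && echo "size ok"
```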

🚧

Enriched Tag for Analytics

If your organization wants to accurately visualize enrichment rates in Unified Analytics, you must include two enrichment flag columns in the CSV:

  • An enriched or enrichment column with true/false values, where true means alerts that match that row should be considered enriched.
  • A <map name> column containing enriched or enrichment in each row (whichever matches the name of the flag column), to add the enriched quality to any alert matched on the map.
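As a hedged illustration, for a hypothetical map named "runbook" the two analytics columns might look like this (hosts and URLs are placeholders):

```shell
# "enriched" flags rows as enriched; the "runbook" column (the map name)
# repeats the flag column's name in every row.
cat > runbook_map.csv <<'EOF'
host,runbook_url,enriched,runbook
app-server-1,https://wiki.example.com/app1,true,enriched
app-server-2,https://wiki.example.com/app2,true,enriched
EOF
```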

Because it is a potentially long-running action, the table upload is performed asynchronously. Therefore, the immediate response indicates only whether the request was properly formatted and, if it was, provides a URL for checking the status of the upload.

The entire table upload must complete successfully for the changes to take effect; the API does not support partial success.

A typical asynchronous upload flow consists of these steps:

  1. Upload the table.
    A Job object is created and a URL for checking the status is returned.
  2. Use the URL to periodically check the job status until it is set to done or failed.
  3. If the job was not successful, you can retry the request.
    If necessary, debug any connectivity issues or data formatting issues that may have contributed to the failed upload. For example, ensure the CSV file follows the enrichment schema definition.
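Step 2 can be sketched as a polling loop. This assumes the job status appears as a "status" field in the JSON body returned at the status URL; API_KEY and the URL argument are placeholders:

```shell
# Poll the status URL from the "location" header until the job
# reports "done" or "failed" (field name is an assumption).
poll_upload() {
  while :; do
    body=$(curl -s --header "Authorization: Bearer $API_KEY" "$1")
    case "$body" in
      *'"status":"done"'*|*'"status":"failed"'*) echo "$body"; return ;;
    esac
    sleep 10
  done
}
# Usage: poll_upload "$STATUS_URL"
```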

Returns

If the request is valid, the Job object for the table upload and the following header:

Item        Description
location    URL to check the status of the upload via the API, which includes the unique system id for the upload job.