The Brand Safety data point reports which impressions appeared alongside negative content that might be harmful to your brand.
To ensure brand safety, you can do the following:
- Before you implement a media buy, identify all negative categories that you consider inappropriate.
- In programmatic environments, use Peer39 brand safety categories to filter negative content from the pool of impressions on which you bid.
- Check the Verification dashboard regularly to identify offending sites.
- Set up White Lists for sites whose pages you do not want classified as negative.
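The pre-bid filtering step above can be sketched in a few lines. This is an illustrative sketch only; the blocked-category set and the `should_bid` function are hypothetical and not part of any Sizmek or Peer39 API:

```python
# Hypothetical pre-bid filter: skip impressions whose page was classified
# with any Brand Safety category you decided to block before the media buy.
BLOCKED_CATEGORIES = {"Accidents", "Crime", "Disaster"}  # illustrative choice

def should_bid(page_categories: set[str]) -> bool:
    """Return False if the page matches any blocked Brand Safety category."""
    return not (page_categories & BLOCKED_CATEGORIES)

print(should_bid({"Sports"}))               # True: no blocked category on the page
print(should_bid({"Sports", "Accidents"}))  # False: page classified as Accidents
```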
For more information about Peer39 semantic technology and negative content, see OVERVIEW: Peer39 Semantic Classification.
Important: Brand Safety categories are available in varying degrees for pre-bid targeting, creative targeting, and post-buy verification. See REFERENCE: Media Relevance Metrics and Categories Grid and REFERENCE: Media Relevance Metrics and Categories Glossary for more information.
Unsafe Impressions, Black List, and White List are additional features that are related to Brand Safety categories.
- Unsafe Impressions: Unsafe Impressions is the number of impressions that were classified with at least one Brand Safety category. Each impression is counted once, even when it is classified with multiple Brand Safety categories, so overlap is excluded. The percentage is calculated from served impressions.
The Peer39 algorithm may classify the same page with multiple negative categories. For example, a page contains a story about a car crash during an earthquake; the Brand Safety categories on this page include Accidents (car crash) and Disasters (earthquake). If an impression appears on this page, Verification reporting increments two counters for this impression, one for Accidents and one for Disasters. However, the Unsafe Impressions count for it is one.
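The counting behavior in this example can be sketched as follows. The impression log below is hypothetical; it only illustrates how per-category counters can overlap while Unsafe Impressions counts each impression once:

```python
# Hypothetical impression log: each entry lists the Brand Safety
# categories that were assigned to the impression's page.
impressions = [
    {"id": 1, "categories": ["Accidents", "Disaster"]},  # car crash during an earthquake
    {"id": 2, "categories": ["Crime"]},
    {"id": 3, "categories": []},                         # no negative classification
]

# Per-category counters: impression 1 is counted under both of its categories.
category_counts = {}
for imp in impressions:
    for cat in imp["categories"]:
        category_counts[cat] = category_counts.get(cat, 0) + 1

# Unsafe Impressions: each impression counted once, regardless of overlap.
unsafe = sum(1 for imp in impressions if imp["categories"])
unsafe_pct = unsafe / len(impressions) * 100  # percentage of served impressions

print(category_counts)               # {'Accidents': 1, 'Disaster': 1, 'Crime': 1}
print(unsafe, f"{unsafe_pct:.1f}%")  # 2 66.7%
```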
- Black List: A Black List contains domains that clients want to avoid. Verification reports impressions from these domains as the Black List metric, which is available in the dashboard and in Excel reporting.
To create a Black List metric, open a ticket with Sizmek Support. In the ticket (or email message to Support), include the following information:
- Account name and ID
- An attached Excel file with your list of domains
Sizmek Customer Support will implement your request within five business days.
After setup, the Black List metric will appear under the General category type in the Verification dashboard.
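Conceptually, the Black List metric is a count of impressions served on listed domains. A minimal sketch, using illustrative placeholder domains rather than a real client list:

```python
# Hypothetical Black List: domains the client wants to avoid.
black_list = {"badsite.example", "unwanted.example"}

# Hypothetical log of domains on which impressions were served.
served_domains = ["news.example", "badsite.example", "sports.example", "badsite.example"]

# Black List metric: how many served impressions came from listed domains.
black_list_metric = sum(1 for domain in served_domains if domain in black_list)
print(black_list_metric)  # 2
```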
- White List: A White List contains domains that you consider non-negative; Verification does not report impressions from these domains as negative impressions.
For example, SiteA contains content that Peer39 classifies as Mature, but you prefer not to consider this site as Mature. You can define a White List for your account or advertiser that contains SiteA to ensure that Verification does not classify impressions from SiteA as Mature in reporting.
Negative category statistics for impressions from domains on the White List are not reported as negative in Verification. The platform does not ignore these impressions; it includes them in the general statistics for the campaign but excludes them from the negative categorization calculation in reporting.
To create a White List, open a ticket with Sizmek Support. In the ticket (or email message to Support), include the following information:
- Account name and ID, or advertiser name and ID
- An attached Excel file with your list of domains
Sizmek Customer Support will implement your request within five business days.
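The White List behavior described above can be sketched as follows. The domains and data structures are hypothetical; the point is that white-listed impressions remain in the general totals but drop out of the negative categorization count:

```python
# Hypothetical White List; "sitea.example" mirrors the SiteA example above.
white_list = {"sitea.example"}

impressions = [
    {"domain": "sitea.example", "categories": ["Mature"]},  # white-listed
    {"domain": "other.example", "categories": ["Mature"]},
    {"domain": "news.example", "categories": []},
]

# White-listed impressions still count toward the campaign's general statistics.
total_served = len(impressions)

# Negative categorization ignores impressions from White List domains.
negative = sum(
    1 for imp in impressions
    if imp["categories"] and imp["domain"] not in white_list
)

print(total_served, negative)  # 3 1
```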
Brand Safety Categories
These are the Brand Safety categories that Peer39 recognizes. Pages classified as Safe From a category were verified by the Peer39 algorithm and confirmed to be free from that category's objectionable content.

| Category | Description |
| --- | --- |
| Accidents | Content about vehicle accidents, physical accidents, and accidents at work or during leisure. Accidents can happen to a person, a property, or nature. |
| Alcohol | Content about the negative aspects of alcohol and drinking: related diseases, news about DUIs, stories of people getting drunk, associated violence, birth defects, and similar topics. |
| Crime | Content about crime or wrongdoing against humans, property, or companies, including content about committing a crime or attempting to commit one. |
| Death | Content about death or suicide, including people (or groups of people) who have died or are dying. Also includes funerals, wakes, memorials, philosophy about death, discussions about death, preparations for death, and life after death. |
| Disaster | Content about naturally occurring disasters and forces of nature that cause damage or pain, usually involving casualties. |
| Drugs | Content about narcotics, including drug culture, drug busts, drug effects on people, drug rehab, and stories about people who take drugs. Includes addictions of any kind (for example, to painkillers). |
| Firearms | Content about firearms, ammunition, shooting, gun laws, the National Rifle Association, and similar topics. |
| Gambling | Content about gambling, betting, casinos, online gambling, and gambling-related news. |
| Mature | Content that deals with sex of any degree, ranging from sexually explicit material whose primary purpose is to produce sexual arousal (including, but not limited to, representations of sexual acts and exposed body parts, sexual coercion, and illegal sexual acts) to discussions of medical issues and lighter, women's magazine-style tips on improving your sex life. Examples include pornography and "light" pornography, men's entertainment, pedophilia and sex-related crime, nudity and semi-nudity (for example, topless men or women), sex tips, sex education, medical content about sex and diseases, women or men in bikinis or lingerie, and any inappropriate or provocative content of a mature nature (for example, a nipple slip, "censored" provocative pictures, or sex tape-related content, not necessarily the tape itself). |
| Profanity and Hate Speech | Content with excessive or inappropriate use of profane language: cursing, pejorative language, swearing, expletives, obscene language, blasphemy, foul language, and adult language, or language relating to behavior that is socially interpreted as insulting, rude, vulgar, desecrating, or disrespectful. Also includes hate and crude humor. |
| Tobacco | Content about tobacco, the tobacco industry, or tobacco's negative impact, such as diseases. |
| Torrent | Content about file-sharing sites and pages where users illegally download music, movies, or TV shows. |
| War and Terror | Content about terrorism, terrorists, and terror attacks, including attempted terror attacks, searches related to terrorism, and interaction with terrorist regimes or locations. |
Peer39 categorizes Brand Safety content in various languages. Future releases will include additional categorizations in other languages. For more information, see Language Categories.