
Fact Checking Policies and Procedures

Introduction

Factchecking data is an additional layer of context we add to our Public Statements, meant to measure a politician's honesty. It is also central to the BULL project, which will display on votesmart.org the number of lies each candidate has in the database.

External factcheckers produce factchecks (reports) that analyze politicians' statements and provide a ruling as to their truthfulness. Vote Smart staff enhance this data and connect these factchecks to Vote Smart content. Those connections to factchecks then appear in BULL.

Fact-check sweeps should be conducted once a month. If a statement that has been fact-checked is not yet in our database, but is something that we can take (a tweet we haven't taken yet, or an op-ed or interview in a publication, for example), the researcher conducting the fact-check sweep should enter that statement into the database and then associate the fact-checker's article with that statement. (This may lead to duplicate statements, but those will be caught in weekly quality control checks. Entering the fact-check is more important than potentially missing good data.) Information on how to take a statement is available on the Speeches Field Boxes in Admin wiki.

Guidelines for talking to the public about Bull are available on the BULL Hotline Guidelines wiki.


Factchecking Data Standards


Scope of Coverage

Statements: all statements for which a public record can be found and that otherwise meet our criteria for speech collection (including the offices and jurisdictions covered). If a statement meets our criteria but is not already in our public statements database, it should be added.

Factchecks (fact-checking reports): if a report analyzes a politician's honesty in a statement that meets our collection criteria, and finds the statement to be false or mostly false, the report will be included. We would therefore exclude:
Core content covered per factcheck: speech_id, factchecker, URL of factchecking report, ruling
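A single factcheck record covering these core fields might look like the sketch below. The class and field names (other than speech_id) are illustrative, not the actual database schema, and the sample values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Factcheck:
    speech_id: int    # Vote Smart public statement ID
    factchecker: str  # e.g. "FactCheck.org", "Politifact"
    url: str          # URL of the factchecking report
    ruling: str       # standardized ruling string

# Hypothetical example record.
fc = Factcheck(
    speech_id=123456,
    factchecker="Politifact",
    url="https://www.politifact.com/factchecks/example/",
    ruling="entirely false",
)
```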


Data Sources
Factchecks are sourced from external data partners ("factcheckers"). As of March 2019, this includes the three most prominent independent, nonpartisan groups: FactCheck.org, Politifact, and The Washington Post.

These factchecks are then associated with Project Vote Smart's Public Statements database.

The association of Project Vote Smart's public statements with factchecks is made by Project Vote Smart staff.

Politifact and Washington Post rulings are sourced from their respective sites and are not modified. Rulings for Factcheck.org factchecks are assessed individually by Vote Smart staff, based on Factcheck.org's factcheck reports.

Vote Smart staff also flag statements as "questionable" based on rulings from Politifact, FactCheck.org, and The Washington Post, in accordance with Vote Smart's internal criteria for rulings.


Criteria for Rulings

A "ruling" is a standardized summary judgment of a politician's statement, attributed to a factchecker.

(Note: the numbers below are NOT the same as factcheckruling_id in our database; they were previously used in spreadsheets and are provided here only as a way to understand those spreadsheets.)

Project Vote Smart's rulings of Factcheck.org's factchecks:

1 = entirely false
2 = mostly false + some context
3 = 50/50 true/false + lots of additional context
4 = mostly true + some context
5 = entirely true

The following may be seen in previous work, though these rulings do not meet our current criteria for inclusion:
11 = entirely inconsistent or full flip flop
13 = half flip flop
15 = entirely consistent or no flip flop
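The legacy codes above can be captured as a simple lookup for reading old spreadsheets; this is a convenience sketch, not part of any Vote Smart tooling.

```python
# Legacy spreadsheet codes for rulings (NOT the database's
# factcheckruling_id values).
LEGACY_RULING_CODES = {
    1: "entirely false",
    2: "mostly false + some context",
    3: "50/50 true/false + lots of additional context",
    4: "mostly true + some context",
    5: "entirely true",
    # Flip-flop codes seen in older work; excluded under current criteria:
    11: "entirely inconsistent or full flip flop",
    13: "half flip flop",
    15: "entirely consistent or no flip flop",
}

def decode_legacy_ruling(code):
    """Translate a legacy spreadsheet code into its ruling text."""
    return LEGACY_RULING_CODES.get(code, "unknown code")
```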

Politifact's rulings

The Washington Post Fact Checker's rulings (found under the Pinocchio Test heading)


Criteria for flagging a statement as "questionable" (this is done through logic and does not involve subjective consideration)
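The criteria themselves are not spelled out on this page. As a sketch, assuming the flag follows the inclusion criteria above (false or mostly-false rulings qualify), the mechanical logic might look like:

```python
# Assumed set of rulings that trigger the "questionable" flag;
# the actual internal criteria may differ.
QUESTIONABLE_RULINGS = {
    "entirely false",
    "mostly false + some context",
}

def is_questionable(ruling):
    # Purely mechanical check: no subjective judgment involved.
    return ruling.strip().lower() in QUESTIONABLE_RULINGS
```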

Expected Frequency of Updates
Weekly when all schedules are up to date; monthly if necessary.


Known Issues


Data

Representation of factchecks

Corrections

Key Tasks


The key task is to associate factchecks with our public statements.
In the past, Clinton (former IT Director) was able to scrape FactCheck.org and use Politifact's API to provide CSVs. Among other data, he included the factcheck URLs, which staff would then match to speech_ids and resubmit. The plan at that time was to provide these CSVs on a monthly basis. With an IT backlog, Research staff began manually retrieving URLs from Politifact's and FactCheck.org's respective websites.

To associate factchecking data to statements:
  1. Evaluate whether the factcheck report meets our criteria for inclusion (in our experience, about 50% do). If it does not fit our normal criteria for inclusion, or is a new kind of evidence that you think ought to be added, run it by the National Director.
  2. Find the quote being evaluated in Vote Smart's Public Statements database. Add the public statement if necessary.
  3. Relate the public statement to the factcheck in accordance with current procedure.
  4. Assign a ruling if one is not provided clearly by the fact checker (note: this is currently done in bulk for Politifact and individually for FactCheck.org).
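The steps above can be sketched as a small function over an in-memory stand-in for the Public Statements database. Every name here (process_factcheck, the statements dict, the simplified criteria check) is hypothetical; the real work happens through Admin.

```python
def meets_criteria(report):
    # Step 1, simplified: include only false / mostly-false rulings.
    # (The real check also covers offices, jurisdictions, and whether
    # the statement is one we can take.)
    return report["ruling"] in {"entirely false", "mostly false + some context"}

def process_factcheck(report, statements):
    """Associate one factcheck report; returns the association or None."""
    if not meets_criteria(report):
        return None
    # Step 2: find the quoted statement; add it if it is takeable.
    speech_id = statements.get(report["quote"])
    if speech_id is None:
        speech_id = max(statements.values(), default=0) + 1
        statements[report["quote"]] = speech_id
    # Steps 3-4: relate the statement to the factcheck and record the ruling.
    return {
        "speech_id": speech_id,
        "factchecker": report["factchecker"],
        "url": report["url"],
        "ruling": report["ruling"],
    }
```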

Special situations:


Management of Factchecking Updates


Key Objectives:
Get our data as up-to-date as possible

Work with IT on the following:
  1. improve process so that updates of this content can be done more frequently
  2. address "Known Issues" as needed
  3. add a way to input this data in Admin; this may include: word search capabilities for public statements; the ability to associate factchecks with candidate speeches; a separate section for factcheck entries, including the ability to browse existing factchecks
  4. integrate Factcheck.org's API
  5. integrate data into our other web properties
  6. integrate data into our public API
  7. future development (See "Incorporating Fact-Checking Data" and "factchecking data to include" documents on the public drive->cross-department projects->possible future projects)

Pace Estimates (for associating factchecks to public statements):
beginners: recorded at 16-28 factcheck articles/hour using spreadsheet imports (sample size of 2 staff members; approximately 50% of those articles were marked "intentionally blank" because they did not fact-check active federal officials)
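Since roughly half the articles are "intentionally blank," the effective association rate is about half the raw pace. A quick sketch of that arithmetic:

```python
# Effective pace after discounting "intentionally blank" articles.
def effective_pace(raw_per_hour, blank_fraction=0.5):
    return raw_per_hour * (1 - blank_fraction)
```

So a raw pace of 16-28 articles/hour works out to roughly 8-14 actual associations/hour.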



API


As of early 2014, the following content was fed through the public statements call of Version 2 of the API. It is Kristen's understanding that "ruling" was replaced with a "questionable" flag (this should be verified):


Example snippet:
"factchecks": [
  {
    "factchecker": "PolitiFact.com",
    "link": "http://www.factcheck.org/2012/10/whoppers-of-2012-final-edition/",
    "ruling": "entirely false"
  },
  {
    "factchecker": "PolitiFact.com",
    "link": "http://www.factcheck.org/2012/10/dubious-denver-debate-declarations/",
    "ruling": "mostly false + some context"
  }
]



Our intention is to highlight the statements made by a candidate that were determined to be questionable or some degree of false. So, claims that were determined to be true would be excluded from the current display.
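The display rule described above can be sketched as a filter over the API's factchecks array. The second entry below and its URL are hypothetical, added only to show a true ruling being excluded.

```python
import json

# Rulings that indicate the claim was true; these statements are
# excluded from the current display.
TRUE_RULINGS = {"entirely true", "mostly true + some context"}

def displayable(factchecks):
    """Keep only factchecks whose ruling indicates some degree of falsehood."""
    return [fc for fc in factchecks if fc["ruling"] not in TRUE_RULINGS]

payload = json.loads("""
{
  "factchecks": [
    {"factchecker": "PolitiFact.com",
     "link": "http://www.factcheck.org/2012/10/whoppers-of-2012-final-edition/",
     "ruling": "entirely false"},
    {"factchecker": "FactCheck.org",
     "link": "http://example.org/hypothetical-report",
     "ruling": "entirely true"}
  ]
}
""")
```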

Future Applications:

Using our tagging system to identify additional speeches that a fact check could apply to.


CategoryResearch