So, you want to create a new audit? Great! We’re excited that you want to add to the Lighthouse project :) The goal of this document is to help you understand what constitutes a “good” audit for Lighthouse, and the steps you can follow to propose a new audit.
Lighthouse audits that surface in the report should:
An audit can return a number of different detail types.
detail type | resource | notes
--- | --- | ---
`'node'` | DOM element | set `path` to a devtoolsNodePath
`'source-location'` | Code Network Resource | use to point to a specific line, column
`'code'` | N/A; freeform | render as monospace font, `like this`
`'url'` | Network Resource | we will make it a pretty link
`'thumbnail'` | Image Resource | same as above, but we show a thumbnail
`'link'` | - | arbitrary link / url combination
`'bytes'` | - | value is in bytes but formatted as KiB
`'text'` \| `'ms'` \| `'numeric'` | - |
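For instance, here is a minimal sketch of how an audit might assemble a table of details. The class name, item data, and require path are invented for illustration; `Audit.makeTableDetails` and the `{key, valueType, label}` heading shape come from the core `Audit` base class, though older versions of Lighthouse spelled the heading fields `itemType`/`text`, so check `audit.js` in the version you're targeting.

```js
// Illustrative only: a fake audit that reports oversized images as a table.
const Audit = require('./audit.js'); // base class in lighthouse-core/audits/

class LargeImagesAudit extends Audit {
  static audit(artifacts) {
    // Hypothetical items; a real audit would compute these from artifacts.
    const items = [
      {url: 'https://example.com/photo.jpg', totalBytes: 1500000},
    ];

    // Each heading maps an item property (`key`) to a detail type
    // (`valueType`) from the table above, plus a column label for the report.
    const headings = [
      {key: 'url', valueType: 'url', label: 'URL'},
      {key: 'totalBytes', valueType: 'bytes', label: 'Size'},
    ];

    return {
      score: items.length === 0 ? 1 : 0, // binary pass/fail
      details: Audit.makeTableDetails(headings, items),
    };
  }
}

module.exports = LargeImagesAudit;
```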
The following detail types accept a `granularity` field:

- `bytes`
- `ms`
- `numeric`

`granularity` must be an integer power of 10; valid values include `0.01`, `0.1`, `1`, `10`, and `100`. The formatted value will be rounded to the nearest multiple of that granularity. If not provided, the default is `0.1` (except for `ms`, where the default is `10`).
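As a concrete (hypothetical) example, a `bytes` column with an explicit granularity might look like this; the `key` and `label` values are invented:

```js
// 155000 bytes is ~151.367 KiB. With the default granularity of 0.1 it
// renders as "151.4 KiB"; with granularity: 10 it renders as "150 KiB".
const heading = {
  key: 'wastedBytes',
  valueType: 'bytes',
  label: 'Potential savings',
  granularity: 10, // must be an integer power of 10
};
```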
The audit ID should be based on the noun of the subject matter that it surfaces to the user.
The filename should match the audit ID.
Policy:

- Avoid `no-` prefixes.

Examples:

- `render-blocking-resources` (not `no-render-blocking-resources`)

Audit titles vary based on report context and audit type.
- Opportunities have a single `title` describing the action the developer should take to fix the issue.
- Standard audits have a `title` and a `failureTitle` that describe what the page is currently doing that resulted in a passing/failing state.

Examples:

- Opportunity `title`: “Compress large images”
- Standard audit `title`: “Page works offline”
- Standard audit `failureTitle`: “Page does not work offline”
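Putting that together, a standard audit's `meta` might declare both strings like this. This is a sketch, not the real audit: the `description` text and the `requiredArtifacts` entry are placeholders.

```js
const Audit = require('./audit.js');

class WorksOffline extends Audit {
  static get meta() {
    return {
      id: 'works-offline', // noun-based, matches the filename works-offline.js
      title: 'Page works offline',                // shown when the audit passes
      failureTitle: 'Page does not work offline', // shown when it fails
      description: 'Placeholder: explain why the check matters and how to fix it.',
      requiredArtifacts: ['Offline'], // placeholder artifact name
    };
  }
}

module.exports = WorksOffline;
```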
#### Provide a basic description of the audit
#### How would the audit appear in the report?
<!-- How would the test look when passing? Would there be additional details available?
How would the test look when failing? What additional details are available?
If the details are tabular, what are the columns?
If not obvious, how would passing/failing be defined? -->
#### How is this audit different from existing ones?
#### What % of developers/pages will this impact?
<!-- (Estimates OK, data points preferred) -->
#### How is the new audit making a better web for end users?
<!-- (Data points preferred) -->
#### What is the resourcing situation?
<!-- Who will create the audits, write the documentation, and maintain both? -->
#### Any other links or documentation that we should check out?