NAME
linkchecker - command line client to check HTML documents and websites for broken links
SYNOPSIS
linkchecker [options] [file-or-url]...
DESCRIPTION
LinkChecker features:
- recursive and multithreaded checking,
- output in colored or normal text, HTML, SQL, CSV, XML or a sitemap graph in different formats,
- support for HTTP/1.1, HTTPS, FTP, mailto:, news:, nntp:, Telnet and local file links,
- restriction of link checking with URL filters,
- proxy support,
- username/password authorization for HTTP, FTP and Telnet,
- support for the robots.txt exclusion protocol,
- support for Cookies,
- support for HTML5,
- HTML and CSS syntax checking,
- antivirus checking,
- a command line, GUI and web interface.
EXAMPLES
The most common use checks the given domain recursively. Beware that this checks the whole site, which can have thousands of URLs. Use the -r option to restrict the recursion depth.
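A typical invocation, using a placeholder domain:
  linkchecker http://www.example.com/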
Don't check URLs with /secret in their name; all other links are checked as usual:
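(mysite.example.com is a placeholder)
  linkchecker --ignore-url=/secret http://mysite.example.com/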
Checking a local HTML file on Unix:
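(the path is a hypothetical local file)
  linkchecker ./docs/index.html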
Checking a local HTML file on Windows:
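(the path is a hypothetical local file)
  linkchecker c:\temp\test.html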
You can skip the http:// url part if the domain starts with www.:
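(example.com is a placeholder)
  linkchecker www.example.com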
You can skip the ftp:// url part if the domain starts with ftp.:
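(example.com is a placeholder)
  linkchecker ftp.example.com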
Generate a sitemap graph and convert it with the graphviz dot utility:
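(a sketch with a placeholder domain; assumes the graphviz dot utility is installed)
  linkchecker -odot -v http://www.example.com/ | dot -Tps > sitemap.ps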
OPTIONS
General options
- -f FILENAME, --config=FILENAME
- Use FILENAME as configuration file. By default LinkChecker uses ~/.linkchecker/linkcheckerrc.
- -h, --help
- Print usage information for this program.
- --stdin
- Read a list of white-space separated URLs to check from stdin.
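For example, assuming a hypothetical file urls.txt with one URL per line:
  linkchecker --stdin < urls.txt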
- -t NUMBER, --threads=NUMBER
- Generate no more than the given number of threads. Default number of threads is 100. To disable threading specify a non-positive number.
- -V, --version
- Print version and exit.
- --list-plugins
- Print available check plugins and exit.
Output options
- -D STRING, --debug=STRING
- Print debugging output for the given logger. Available loggers are cmdline, checking, cache, gui, dns and all. Specifying all is an alias for specifying all available loggers. The option can be given multiple times to debug with more than one logger. For accurate results, threading will be disabled during debug runs.
- -F TYPE[/ENCODING][/FILENAME], --file-output=TYPE[/ENCODING][/FILENAME]
- Output to a file linkchecker-out.TYPE, $HOME/.linkchecker/blacklist for blacklist output, or FILENAME if specified. The ENCODING specifies the output encoding; the default is that of your locale. Valid encodings are listed at http://docs.python.org/library/codecs.html#standard-encodings. The FILENAME and ENCODING parts of the none output type will be ignored; otherwise, if the file already exists, it will be overwritten. You can specify this option more than once. Valid file output types are text, html, sql, csv, gml, dot, xml, sitemap, none or blacklist. Default is no file output. The various output types are documented below. Note that you can suppress all console output with the option -o none.
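For instance, the following writes an HTML report encoded as UTF-8 to a hypothetical file report.html (example.com is a placeholder):
  linkchecker -Fhtml/utf-8/report.html http://www.example.com/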
- --no-status
- Do not print check status messages.
- --no-warnings
- Don't log warnings. Default is to log warnings.
- -o TYPE[/ENCODING], --output=TYPE[/ENCODING]
- Specify the output type as text, html, sql, csv, gml, dot, xml, sitemap, none or blacklist. Default type is text. The various output types are documented below. The ENCODING specifies the output encoding; the default is that of your locale. Valid encodings are listed at http://docs.python.org/library/codecs.html#standard-encodings.
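For example, to capture CSV results in a hypothetical file via shell redirection:
  linkchecker -ocsv http://www.example.com/ > results.csv   # example.com is a placeholder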
- -q, --quiet
- Quiet operation, an alias for -o none. This is only useful with -F.
- -v, --verbose
- Log all checked URLs. Default is to log only errors and warnings.
- -W REGEX, --warning-regex=REGEX
- Define a regular expression which prints a warning if it matches any content of the checked link. This applies only to valid pages, so we can get their content. Use this to check for pages that contain some form of error, for example "This page has moved" or "Oracle Application error". Note that multiple values can be combined in the regular expression, for example "(This page has moved|Oracle Application error)". See section REGULAR EXPRESSIONS for more info.
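A sketch combining the two patterns above, with a placeholder domain:
  linkchecker -W"(This page has moved|Oracle Application error)" http://www.example.com/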
Checking options
- --cookiefile=FILENAME
- Read a file with initial cookie data. The cookie data format is explained below.
- --check-extern
- Check external URLs.
- --ignore-url=REGEX
- URLs matching the given regular expression will be ignored and not checked. This option can be given multiple times. See section REGULAR EXPRESSIONS for more info.
- -N STRING, --nntp-server=STRING
- Specify an NNTP server for news: links. Default is the environment variable NNTP_SERVER. If no host is given, only the syntax of the link is checked.
- --no-follow-url=REGEX
- Check but do not recurse into URLs matching the given regular expression. This option can be given multiple times. See section REGULAR EXPRESSIONS for more info.
- -p, --password
- Read a password from console and use it for HTTP and FTP authorization. For FTP the default password is anonymous@. For HTTP there is no default password. See also -u.
- -r NUMBER, --recursion-level=NUMBER
- Check recursively all links up to given depth. A negative depth will enable infinite recursion. Default depth is infinite.
- --timeout=NUMBER
- Set the timeout for connection attempts in seconds. The default timeout is 60 seconds.
- -u STRING, --user=STRING
- Try the given username for HTTP and FTP authorization. For FTP the default username is anonymous. For HTTP there is no default username. See also -p.
- --user-agent=STRING
- Specify the User-Agent string to send to the HTTP server, for example "Mozilla/4.0". The default is "LinkChecker/X.Y" where X.Y is the current version of LinkChecker.
CONFIGURATION FILES
Configuration files can specify all options above. They can also specify some options that cannot be set on the command line. See linkcheckerrc(5) for more info.
OUTPUT TYPES
Note that by default only errors and warnings are logged. You should use the --verbose option to get the complete URL list, especially when outputting a sitemap graph format.
- text
- Standard text logger, logging URLs in keyword: argument fashion.
- html
- Log URLs in keyword: argument fashion, formatted as HTML. Additionally has links to the referenced pages. Invalid URLs have HTML and CSS syntax check links appended.
- csv
- Log check result in CSV format with one URL per line.
- gml
- Log parent-child relations between linked URLs as a GML sitemap graph.
- dot
- Log parent-child relations between linked URLs as a DOT sitemap graph.
- gxml
- Log check result as a GraphXML sitemap graph.
- xml
- Log check result as machine-readable XML.
- sitemap
- Log check result as an XML sitemap whose protocol is documented at http://www.sitemaps.org/protocol.html.
- sql
- Log check result as SQL script with INSERT commands. An example script to create the initial SQL table is included as create.sql.
- blacklist
- Suitable for cron jobs. Logs the check result into a file ~/.linkchecker/blacklist which only contains entries with invalid URLs and the number of times they have failed.
- none
- Logs nothing. Suitable for debugging or checking the exit code.
REGULAR EXPRESSIONS
LinkChecker accepts Python regular expressions. See http://docs.python.org/howto/regex.html for an introduction. An addition is that a leading exclamation mark negates the regular expression.
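For example, used with --ignore-url, the (hypothetical) expression !^http: matches every URL that does not start with http:, so all non-http URLs are ignored and only http: URLs are checked.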
COOKIE FILES
A cookie file contains standard HTTP header (RFC 2616) data with the following possible names:
- Host (required)
- Sets the domain the cookies are valid for.
- Path (optional)
- Gives the path the cookies are valid for; the default path is /.
- Set-cookie (required)
- Set cookie name/value. Can be given more than once.
Multiple entries are separated by a blank line. The example below will send two cookies to all URLs starting with http://example.com/hello/ and one to all URLs starting with https://example.org/:
Host: example.com
Path: /hello
Set-cookie: ID="smee"
Set-cookie: spam="egg"
Host: example.org
Set-cookie: baggage="elitist"; comment="hologram"
PROXY SUPPORT
To use a proxy on Unix or Windows set the $http_proxy, $https_proxy or $ftp_proxy environment variable to the proxy URL. The URL should be of the form http://[user:pass@]host[:port]. LinkChecker also detects manual proxy settings of Internet Explorer under Windows systems, and gconf or KDE on Linux systems. On a Mac use the Internet Config to select a proxy. You can also set a comma-separated domain list in the $no_proxy environment variable to ignore any proxy settings for these domains. Setting an HTTP proxy on Unix for example looks like this:
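(proxy.example.com and the port are placeholders)
  export http_proxy="http://proxy.example.com:8080"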
Proxy authentication is also supported:
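(user1 and mypass are placeholder credentials)
  export http_proxy="http://user1:mypass@proxy.example.com:8081"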
Setting a proxy on the Windows command prompt:
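(again with a placeholder proxy host)
  set http_proxy=http://proxy.example.com:8080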
PERFORMED CHECKS
All URLs have to pass a preliminary syntax test. Minor quoting mistakes will issue a warning; all other invalid syntax issues are errors.
After the syntax check passes, the URL is queued for connection
checking. All connection check types are described below.
PLUGINS
There are two plugin types: connection and content plugins.
Connection plugins are run after a successful connection to the
URL host.
Content plugins are run if the URL type has content (mailto: URLs have no content, for example) and if the check is not forbidden (i.e. by HTTP robots.txt).
See linkchecker --list-plugins for a list of plugins and
their documentation. All plugins are enabled via the linkcheckerrc(5)
configuration file.
RECURSION
Before descending recursively into a URL, it has to fulfill several conditions. They are checked in this order:
1. A URL must be valid.
2. A URL must be parseable. This currently includes HTML files, Opera bookmarks files, and directories. If a file type cannot be determined, it is assumed to be non-parseable.
3. The URL content must be retrievable. This is usually the case except for example mailto: or unknown URL types.
4. The maximum recursion level must not be exceeded. It is configured with the --recursion-level option and is unlimited per default.
5. It must not match the ignored URL list. This is controlled with the --ignore-url option.
6. The Robots Exclusion Protocol must allow links in the URL to be followed recursively.
Note that the directory recursion reads all files in that directory, not just a subset like index.htm*.
NOTES
URLs on the command line starting with ftp. are treated like ftp://ftp., URLs starting with www. are treated like http://www.. You can also give local files as arguments.
If you have your system configured to automatically establish a connection to the internet (e.g. with diald), it will connect when checking links not pointing to your local host. Use the --ignore-url option to prevent this.
Javascript links are not supported.
If your platform does not support threading, LinkChecker disables it automatically.
You can supply multiple user/password pairs in a configuration file.
When checking news: links the given NNTP host doesn't need to be the same as the host of the user browsing your pages.
ENVIRONMENT
NNTP_SERVER - specifies default NNTP server
http_proxy - specifies default HTTP proxy server
https_proxy - specifies default HTTPS proxy server
ftp_proxy - specifies default FTP proxy server
no_proxy - comma-separated list of domains to not contact over a proxy server
LC_MESSAGES, LANG, LANGUAGE - specify output language
RETURN VALUE
The return value is 2 when a program error occurred. The return value is 1 when invalid links were found, or when link warnings were found and warnings are enabled. Else the return value is zero.
LIMITATIONS
LinkChecker consumes memory for each queued URL to check. With thousands
of queued URLs the amount of consumed memory can become quite large. This
might slow down the program or even the whole system.
FILES
~/.linkchecker/linkcheckerrc - default configuration file
~/.linkchecker/blacklist - default blacklist logger output filename
linkchecker-out.TYPE - default logger file output name
http://docs.python.org/library/codecs.html#standard-encodings - valid output encodings
http://docs.python.org/howto/regex.html - regular expression documentation
AUTHOR
Bastian Kleineidam <bastian.kleineidam@web.de>
COPYRIGHT
Copyright © 2000-2014 Bastian Kleineidam