
ViaWorks Web Connector


Supported version

Not applicable.


  • The ViaWorks Web Connector is an early alpha version. For feature requests, please contact 
  • It does not support any authentication methods.
  • It does not render JavaScript before processing pages.

Permissions needed

No permissions are needed unless the pages require authentication. If authentication is in place, the permissions needed for indexing are determined by the authentication method.

Other information

Pluggable Architecture

The ViaWorks Web Connector has a pluggable architecture. Custom plug-ins can be written to handle, for example, Forms Authentication. Each connection can have its own set of plug-ins. Custom configuration settings can be passed to the plug-ins via the Custom 1 ... Custom 10 parameters. Note: sensitive data should be passed encrypted. Please contact for more information about custom plug-ins.
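
As an illustration only (the connector's real plug-in interface is not shown in this document, and all names below are hypothetical), a per-connection plug-in receiving the Custom 1 ... Custom 10 settings might look like this:

```python
# Hypothetical sketch: illustrates the idea of per-connection plug-ins that
# receive the "Custom 1" ... "Custom 10" settings. Not the connector's API.
class CrawlerPlugin:
    def __init__(self, custom_settings):
        # custom_settings: dict mapping "Custom 1" ... "Custom 10" to strings
        self.settings = custom_settings

    def on_page_downloaded(self, url, html):
        raise NotImplementedError


class FormsAuthPlugin(CrawlerPlugin):
    """Example: a plug-in that detects a login form so the crawler can react."""

    def on_page_downloaded(self, url, html):
        # Illustrative heuristic for spotting a forms-authentication page.
        lowered = html.lower()
        return "<form" in lowered and "password" in lowered


plugin = FormsAuthPlugin({"Custom 1": "login-url=https://example.com/login"})
needs_auth = plugin.on_page_downloaded(
    "https://example.com/secret",
    '<form action="/login"><input type="password" name="pw"></form>',
)
```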


Pages will be indexed with the Everyone SID (Active Directory).


Set EnableStorePagesToDiskForDebugging=true to save the raw web pages to disk. 

Best Practices

Make sure you do not index the navigation area that is usually present on every page.


Page 1

  • Connection name
  • Add URLs to index

Page 2

  • Settings (no reason to change)

Page 3

Page 4




Each setting below is listed as Name (data type, default value), followed by its description and notes.

Crawl Timeout In Seconds (int, default: 0 = disabled)
Maximum number of seconds before the crawl times out and stops. Useful for demos when you only want to crawl some pages before halting the crawl.

Downloadable Content Types (string[], default: text/html)
The MIME types you want to download.

Http Request Max Auto Redirects (int, default: 3)
Maximum number of automatic redirects that an HTTP request follows.

Http Request Timeout In Seconds (int, default: 300; 0 = disabled)
Number of seconds before an HTTP request times out.

Http Service Point Connection Limit
Number of concurrent HTTP(S) connections that can be open to the same host.

Is External Page Crawling Enabled (bool, default: false)
Whether the crawler is allowed to crawl external pages.

Is External Page Links Crawling Enabled (bool, default: false)
Whether pages external to the root URL should have their links crawled. "Is External Page Crawling Enabled" must be true for this setting to have any effect.

Is Forced Link Parsing Enabled (bool)
Whether the crawler should parse the page's links even if a crawl decision determines that those links will not be crawled.

Is Http Request Auto Redirects Enabled (bool, default: true)
Whether the request should follow redirects.

Is Http Request Automatic Decompression Enabled (bool, default: false)
Whether gzip and deflate content will be automatically accepted and decompressed.

Is Ignore Robots.Txt If Root Disallowed Enabled (bool, default: false)
If true, the crawler ignores the robots.txt file if it disallows crawling the root URI.

Is Respect Anchor Rel No Follow Enabled (bool, default: true)
Whether the crawler should ignore links that have a <a href="whatever" rel="nofollow"> attribute.

Is Respect HttpXRobots TagHeader NoFollow Enabled (bool, default: false)
Whether the crawler should ignore links on pages that have an HTTP X-Robots-Tag header of nofollow.

Is Respect Meta Robots No Follow Enabled (bool, default: true)
Whether the crawler should ignore links on pages that have a <meta name="robots" content="nofollow" /> tag.

Is Respect Url Named Anchor Or Hashbang Enabled (bool, default: false)
Whether URL named anchors or hashbangs are considered part of the URL. If false, they are ignored; if true, they are treated as part of the URL.

Is Respect Robots.Txt Enabled (bool, default: true)
Whether the crawler should retrieve and respect the robots.txt file.

Is Uri Recrawling Enabled (bool, default: false)
Whether URIs should be crawled more than once. This is not common and should remain false for most scenarios.

Is Ssl Certificate Validation Enabled (bool, default: true)
Whether to validate the server SSL certificate. If true, the default validation is performed; if false, certificate validation is bypassed. Disabling validation is useful for crawling sites with an invalid or expired SSL certificate.

Build page if canonical is not pointing to url (bool, default: true)
Used to skip indexing of pages where the rel=canonical is not equal to the page URL. If this value is false, the crawler checks rel=canonical and does not build the page when the canonical differs from the page URL. Can be used to avoid page duplication.

Max Concurrent Threads (int, default: 5)
Maximum number of concurrent threads to use for HTTP(S) requests.

Max Crawl Depth (int, default: 100)
Maximum number of levels below the root page to crawl. If the value is 0, the homepage is crawled but none of its links are. If the value is 1, the homepage and its links are crawled, but none of the links' links are.
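
The depth semantics can be sketched as a breadth-first traversal. This is a simplified model of the documented behavior, not the connector's implementation:

```python
from collections import deque

def crawl_depth_order(links, root, max_depth):
    # links: dict mapping a page URL to the URLs it links to.
    # Models Max Crawl Depth semantics: depth 0 crawls only the root page,
    # depth 1 also crawls the root's links, and so on.
    seen = {root}
    order = [root]
    queue = deque([(root, 0)])
    while queue:
        url, depth = queue.popleft()
        if depth >= max_depth:
            continue  # this page's links are beyond the depth limit
        for nxt in links.get(url, []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append((nxt, depth + 1))
    return order

# With Max Crawl Depth = 1: the root and its direct links, but not "c".
pages = crawl_depth_order({"root": ["a", "b"], "a": ["c"]}, "root", 1)
```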

Max Links Per Page (int, default: 0)
Maximum number of links to crawl per page. If zero, this setting has no effect.

Max Memory Usage Cache Time In Seconds (int, default: 300 = 5 minutes)
The maximum amount of time before refreshing the value used to determine how much memory the process hosting the crawler instance is using.

Max Memory Usage In MB (int, default: 500)
The maximum amount of memory the process is allowed to use. If this limit is exceeded, the crawler stops prematurely. If zero, this setting has no effect.

Max Page Size In Bytes (int, default: 10 MB)
Maximum size of a page. If the page size is above this value, the page is not downloaded or processed.
If zero, this setting has no effect.
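
This rule can be sketched as follows, as an illustration of the documented behavior rather than the connector's code:

```python
def should_download(page_size_in_bytes, max_page_size_in_bytes):
    # Zero disables the limit entirely.
    if max_page_size_in_bytes == 0:
        return True
    # Pages above the limit are neither downloaded nor processed.
    return page_size_in_bytes <= max_page_size_in_bytes

# A 2 MB page against the default 10 MB limit.
allowed = should_download(2 * 1024 * 1024, 10 * 1024 * 1024)
```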

Max Pages To Crawl Per Domain (int, default: 0)
Maximum number of pages to crawl per domain. If zero, this setting has no effect.

Max Retry Count (int, default: 3)
The maximum number of retries for a URL when a web exception is encountered. If zero, no retries are made.

Max Robots.Txt Crawl Delay In Seconds (int, default: 1)
The maximum number of seconds to respect from the robots.txt "Crawl-delay: X" directive. "Is Respect Robots.Txt Enabled" must be true for this to have any effect. If zero, the crawler uses whatever crawl delay the robots.txt requests, no matter how high the value is.
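
The interaction between the robots.txt Crawl-delay and this cap can be sketched as:

```python
def effective_crawl_delay(robots_txt_delay, max_robots_txt_delay):
    # A sketch of the documented rule, not the connector's code.
    # If the cap is zero, the robots.txt value is honored no matter how high.
    if max_robots_txt_delay == 0:
        return robots_txt_delay
    # Otherwise the delay is limited to the configured maximum.
    return min(robots_txt_delay, max_robots_txt_delay)

# robots.txt asks for "Crawl-delay: 30", but the default cap is 1 second.
delay = effective_crawl_delay(30, 1)
```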

Min Available Memory Required In MB (int)
Uses the closest multiple of 16 to the value set. If at least this much memory is not available before starting a crawl, the crawler throws InsufficientMemoryException.
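
The rounding to the closest multiple of 16 can be illustrated as follows; note that the tie-breaking direction is an assumption, since the document does not specify it:

```python
def closest_multiple_of_16(value_in_mb):
    # Round to the nearest multiple of 16; ties round up (assumed behavior).
    return ((value_in_mb + 8) // 16) * 16

rounded = closest_multiple_of_16(100)  # 96 is closer to 100 than 112 is
```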

Min Crawl Delay Per Domain Milliseconds (int, default: 0)
The number of milliseconds to wait between HTTP requests to the same domain.

Min Retry Delay In Milliseconds (int, default: 10 seconds)
The minimum delay between a failed HTTP request and the next retry.


Robots.Txt User Agent String (string, default: Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0)
The user agent string to use when checking the robots.txt file for specific directives. Examples of other crawlers' user agent values are "googlebot", "slurp", etc.

Seed urls (string[])
The URLs used to seed the crawler.

User Agent String (string, default: Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0)
The user agent string to use for HTTP(S) requests.

Custom 1 ... Custom 10 (string)
Custom settings 1 through 10 that are also passed to plug-ins.

Custom Plugin Path (string, default: Custom)
Relative path to custom plug-ins.

Enable Store Pages To Disk For Debugging (bool, default: false)
Stores the downloaded pages to disk. For debugging purposes only.

Persist state to disk (bool, default: true)
Persists the state of the web crawler to disk. This reduces memory pressure on the crawler.

Example of other MIME types

  • application/epub+zip, application/msword, application/octet-stream, application/oebps-package+xml, application/onenote, application/pdf, application/postscript, application/rtf, application/ssml+xml, application/vnd.bmi, application/vnd.hp-pclxl, application/vnd.kde.kpresenter, application/vnd.kde.kspread, application/vnd.kde.kword, application/vnd.micrografx.flo, application/, application/, application/, application/, application/, application/, application/, application/, application/, application/, application/, application/, application/, application/vnd.oasis.opendocument.image, application/vnd.oasis.opendocument.image-template, application/vnd.oasis.opendocument.presentation, application/vnd.oasis.opendocument.presentation-template, application/vnd.oasis.opendocument.spreadsheet, application/vnd.oasis.opendocument.spreadsheet-template, application/vnd.oasis.opendocument.text, application/vnd.oasis.opendocument.text-template, application/vnd.oasis.opendocument.text-web, application/vnd.openxmlformats-officedocument.presentationml.presentation, application/vnd.openxmlformats-officedocument.presentationml.slide, application/vnd.openxmlformats-officedocument.presentationml.slideshow, application/vnd.openxmlformats-officedocument.presentationml.template, application/vnd.openxmlformats-officedocument.spreadsheetml.template, application/vnd.openxmlformats-officedocument.wordprocessingml.document, application/vnd.openxmlformats-officedocument.wordprocessingml.template, application/vnd.stardivision.calc, application/vnd.stardivision.draw, application/vnd.stardivision.impress, application/vnd.stardivision.writer, application/vnd.stardivision.writer-global, application/vnd.sun.xml.calc, application/vnd.sun.xml.calc.template, application/vnd.sun.xml.draw, application/vnd.sun.xml.draw.template, application/vnd.sun.xml.impress, application/vnd.sun.xml.impress.template, application/vnd.sun.xml.writer, application/, application/vnd.sun.xml.writer.template, 
application/vnd.svd, application/vnd.visio, application/vnd.wordperfect, application/vnd.xara, application/voicexml+xml, application/x-7z-compressed, application/x-abiword, application/x-bzip, application/x-bzip2, application/x-gtar, application/xhtml+xml, application/x-latex, application/xml, application/x-mscardfile, application/x-msmetafile, application/x-mspublisher, application/x-mswrite, application/x-rar-compressed, application/x-tar, application/x-tex, application/x-texinfo, application/zip, image/bmp, image/cgm, image/g3fax, image/gif, image/ief, image/jpeg, image/png, image/prs.btif, image/svg+xml, image/tiff, image/vnd.adobe.photoshop, image/vnd.dwg, image/vnd.dxf, image/, image/vnd.xiff, image/webp, image/x-cmu-raster, image/x-pcx, image/x-pict, image/x-portable-anymap, image/x-portable-bitmap, image/x-rgb, image/x-xbitmap, Lotus 1-2-3, Lotus Wordpro, message/rfc822, text/csv, text/html, text/plain, text/richtext, text/sgml, text/tab-separated-values, text/x-uuencode, text/x-vcalendar, text/x-vcard

Most common MIME types (PDF/Office documents/Html/txt)

application/msword,application/vnd.openxmlformats-officedocument.wordprocessingml.document,application/vnd.openxmlformats-officedocument.wordprocessingml.template,text/html,application/pdf,application/,application/vnd.openxmlformats-officedocument.presentationml.presentation,application/rtf,text/plain,application/, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
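
When the crawler decides whether to download a response, its Content-Type header has to be matched against the configured Downloadable Content Types. A minimal sketch of such a check, assuming parameter stripping and case-insensitive matching (the document does not specify the connector's exact matching rules):

```python
def is_downloadable(content_type_header, downloadable_content_types):
    # Content-Type headers may carry parameters ("text/html; charset=utf-8"),
    # so only the media type itself is compared, case-insensitively.
    media_type = content_type_header.split(";")[0].strip().lower()
    allowed = {t.strip().lower() for t in downloadable_content_types}
    return media_type in allowed

# The default Downloadable Content Types list contains only "text/html".
ok = is_downloadable("text/html; charset=utf-8", ["text/html"])
```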



Content types

The connector can index the following content types.

  • Pages. See MIME types.


Default refiners


Preview of HTML files needs to be enabled. Add "html" to the ExtList of the "document" preview application (the <add AppName="document" ...> element under "previewApplications" / "previewApp"), like so:

<add AppName="document" Action="preview" Script="DisplayLink" ExtList="txt;doc;docx;dotx;docm;docxm;dot;pdf;cs;css;js;fax;xml;xls;xlsm;xlsx;xlsxm;xlt;xltm;xltx;xps;msg;html" DocTypeList="" SkipRootExtList="" SkipRootDocTypeList="" Priority="20"></add>

How to change the default preview types is described in Changing the Default Preview Settings.


The Open action points to the URL of the page or file.


No Authorization

Custom Templates

Hover: File

Item: File
