Version: 2506.1

Requirements

  • JDK 17 or higher

Development Environment Setup

  • Using Maven Repository
    You can use the SDK from the Maven repository. Add the following dependency to your Gradle script.

    implementation 'io.github.sparrow-co-ltd:sparrow-ondemand-java-sdk:2.0.0-SNAPSHOT'
  • Using Local JAR
    To use a locally stored JAR file, specify the file path in your Gradle script as follows.

    implementation files("{jar file path}")

Token Issuance


Initialization

Add the following code to create an OndemandClient instance for ondemand analysis requests.

// config
OndemandClientConfig config = OndemandClientConfig.builder()
        .apiKey("eyJhbGciOiJIUzUxMiJ9.........")
        .url("http://test.com")
        .build();

// create client
OndemandClient client = new OndemandClient(config);

OndemandClientConfig is used to specify configuration values when creating an OndemandClient.

  • apiKey : Enter the token issued in step 3 for authentication during API requests
  • url : Enter the ondemand backend URL

Analysis Methods

Analysis is performed through methods of the created client. An OndemandException may be thrown during method execution. For detailed information about exceptions, please refer to the OndemandException section below.

1. Analysis Request

You can send analysis requests using the created client.

RequestInfo requestInfo = client.doAnalysis(analysisRequest: AnalysisRequest);

Parameters

  • analysisRequest Required object
    Three interfaces are provided according to analysis type: SastAnalysisRequest, ScaAnalysisRequest, and DastAnalysisRequest.

    • SastAnalysisRequest
      Downloads analysis files and performs SAST analysis

      SastAnalysisRequest request = SastAnalysisRequest.builder()
              .callbackUrl(...)
              .option(...)
              .build();
      • callbackUrl List
        Sets callbacks that fire on events such as analysis completion and analysis status updates. For convenience, the of factory method and the callback format classes SrcUploadCallback, CompleteCallback, ProgressCallback, and DastProgressCallback are provided.

        • url Required String
          Sets the callback URL
        • type Required List< CallbackType >
          Sets the callback type
        • headers List
          • key Required String
            Callback header key
          • value Required String
            Callback header value
      • memo String Optional item for recording additional information about the analysis. Output as entered.

      • option Required object
        Can set SAST analysis source code information, analysis options, etc.

        • analysisSource Required object
          Repository where files to be analyzed are stored, supporting VCS and ObjectStorage types.

          • VCS

            • url Required String
              URL of the repository where files to be analyzed are stored.
            • branch String
              Name of the branch where files to be analyzed are uploaded. If not entered, the default branch is analyzed.
            • tag String
              Tag information of the branch to be analyzed.
            • commitId String
              Commit ID information to be analyzed.
            • id String
              ID for VCS authentication.
            • password String
              Password for VCS authentication. Must be entered together with id and cannot be entered at the same time as authToken.
            • authToken String
              AuthToken for VCS authentication. Cannot be entered at the same time as id and password.
          • ObjectStorage

            • bucket Required String
              Bucket of the file to be analyzed.
            • object Required String
              Object path of the file to be analyzed.
            • endPoint Required String
              Endpoint where the bucket to be analyzed is located.
            • accessKey String
              AccessKey for authentication.
            • secretKey String
              SecretKey for authentication.
        • Other Options
          See Source Code Analysis Options under Analysis Options below.

    • ScaAnalysisRequest
      Downloads analysis files and performs SCA analysis

      ScaAnalysisRequest request = ScaAnalysisRequest.builder()
              .callbackUrl(...)
              .option(...)
              .build();
      • callbackUrl List
        Sets callbacks that fire on events such as analysis completion and analysis status updates. For convenience, the of factory method and the callback format classes SrcUploadCallback, CompleteCallback, ProgressCallback, and DastProgressCallback are provided.

        • url Required String
          Sets the callback URL
        • type Required List< CallbackType >
          Sets the callback type
        • headers List
          • key Required String
            Callback header key
          • value Required String
            Callback header value
      • memo String Optional item for recording additional information about the analysis. Output as entered.

      • option Required object
        Can set SCA analysis source code information, analysis options, etc.

        • analysisSource Required object
          Repository where files to be analyzed are stored, supporting VCS and ObjectStorage types.

          • VCS

            • url Required String
              URL of the repository where files to be analyzed are stored.
            • branch String
              Name of the branch where files to be analyzed are uploaded. If not entered, the default branch is analyzed.
            • tag String
              Tag information of the branch to be analyzed.
            • commitId String
              Commit ID information to be analyzed.
            • id String
              ID for VCS authentication.
            • password String
              Password for VCS authentication. Must be entered together with id and cannot be entered at the same time as authToken.
            • authToken String
              AuthToken for VCS authentication. Cannot be entered at the same time as id and password.
          • ObjectStorage

            • bucket Required String
              Bucket of the file to be analyzed.
            • object Required String
              Object path of the file to be analyzed.
            • endPoint Required String
              Endpoint where the bucket to be analyzed is located.
            • accessKey String
              AccessKey for authentication.
            • secretKey String
              SecretKey for authentication.
        • Other Options
          See Open Source Analysis Options under Analysis Options below.

    • DastAnalysisRequest
      Performs DAST analysis by entering the target URL for vulnerability analysis.

      DastAnalysisRequest request = DastAnalysisRequest.builder()
              .option(...)
              .build();
      • callbackUrl List
        Sets callbacks that fire on events such as analysis completion and analysis status updates. For convenience, the of factory method and the callback format classes SrcUploadCallback, CompleteCallback, ProgressCallback, and DastProgressCallback are provided.
        • url Required String
          Sets the callback URL
        • type Required List< CallbackType >
          Sets the callback type
        • headers List
          • key Required String
            Callback header key
          • value Required String
            Callback header value
      • memo String Optional item for recording additional information about the analysis. Output as entered.
      • option Required object
        Can set the DAST analysis target URL, crawler options, etc.

Return Value

  • RequestInfo object (see Object Information below)

Example Code

Example code for requesting SAST VCS analysis, SCA ObjectStorage analysis, and DAST analysis.


// Sast
SastAnalysisRequest request = SastAnalysisRequest.builder()
        .option(
                SastOptionRequest.builder()
                        .analysisSource(
                                AnalysisSourceRequest.VCS.builder()
                                        .url("https://github.com/test/testRepo.git")
                                        .build())
                        .build())
        .build();

RequestInfo requestInfo = client.doAnalysis(request);

// Sca
ScaAnalysisRequest request2 = ScaAnalysisRequest.builder()
        .callbackUrl(
                Arrays.asList(
                        CallbackUrl.of("url",
                                Arrays.asList(CallbackType.ANALYSIS_PROGRESS),
                                Arrays.asList(CallbackHeader.of("key", "value"))
                        )))
        .option(
                ScaOptionRequest.builder()
                        .analysisSource(
                                AnalysisSourceRequest.ObjectStorage.builder()
                                        .endPoint("endpoint")
                                        .bucket("bucket")
                                        .object("object")
                                        .accessKey("accessKey")
                                        .secretKey("secretKey")
                                        .build())
                        .sbomCreatorEmail("DD")
                        .build())
        .build();

RequestInfo requestInfo2 = client.doAnalysis(request2);

// Dast
DastAnalysisRequest request3 = DastAnalysisRequest.builder()
        .option(
                DastOptionRequest.builder()
                        .crawlerTargetSeedUrls(Arrays.asList("http://52.78.58.6:38380/dcta-for-java/absolutePathDisclosure"))
                        .build())
        .build();

RequestInfo requestInfo3 = client.doAnalysis(request3);

Call the doAnalysis method with each tool's AnalysisRequest as the parameter. A RequestInfo object is returned as the response.


2. Request Status Check

If the analysis request was successful, you can check the request status.

RequestInfo requestInfo = client.getRequest(requestId: Long)

Parameters

  • requestId Required Long
    Request ID.

Return Value

  • RequestInfo object (see Object Information below)
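As a usage sketch (the requestId value is a placeholder, and the RequestInfo getter names such as getStatus are assumptions based on the fields listed under Object Information):

```java
// Check the status of a previously submitted analysis request
RequestInfo requestInfo = client.getRequest(1L); // requestId returned by doAnalysis
System.out.println(requestInfo.getStatus());     // current request status
```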

3. Analysis Status Check

If the analysis request was successful, you can check the status of the ongoing analysis. Type casting is possible with detailed information for each tool: SastAnalysisInfo, ScaAnalysisInfo, and DastAnalysisInfo.

AnalysisInfo analysisInfo = client.getAnalysis(analysisId: Long)

Parameters

  • analysisId Required Long
    Analysis ID.

Return Value

  • AnalysisInfo object (see Object Information below)

4. Analysis Result File Download

If analysis is completed, you can download the analysis result file.

client.downLoadAnalysisResult(analysisId: Long, filePath: String);

When you call the downLoadAnalysisResult method, the analysis result file is downloaded to the specified file path.

Parameters

  • analysisId Required Long
    Analysis ID.
  • filePath Required String
    File path for the downloaded result. The path must include the file name, and only the zip extension is supported.
    e.g. /home/result.zip

Return Value

None
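As a sketch (the analysisId and the destination path are placeholders):

```java
// Download the completed analysis result to a zip file
client.downLoadAnalysisResult(1L, "/home/result.zip");
```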


5. Analysis Result Reader Creation

If analysis is completed, you can receive an analysis result Reader object.

ResultReader resultReader = client.getAnalysiResultReader(analysisId: Long, filePath: String);

When you call the getAnalysiResultReader method, the analysis result is downloaded to the given path and then extracted into a directory named after the analysisId in the same location. A Reader object is created for the extracted files.

Parameters

  • analysisId Required Long
    Analysis ID.
  • filePath Required String
    File path for the downloaded result. The path must include the file name, and only the zip extension is supported.
    e.g. /home/result.zip

Return Value

  • ResultReader object SastResultReader, ScaResultReader, and DastResultReader are provided, and type casting is required for each tool.

    SastResultReader sastResultReader = (SastResultReader) client.getAnalysiResultReader(analysisId: Long, filePath: String);
    ScaResultReader scaResultReader = (ScaResultReader) client.getAnalysiResultReader(analysisId: Long, filePath: String);
    DastResultReader dastResultReader = (DastResultReader) client.getAnalysiResultReader(analysisId: Long, filePath: String);
    • SastResultReader object

      readSummary method

      Returns analysis result summary information.

      SastSummary sastSummary = sastResultReader.readSummary();
      • Parameters None

      • Return Value SastSummary

      readAsset method

      Returns analysis asset list.

      List<String> assets = sastResultReader.readAsset();
      • Parameters None

      • Return Value List< String >

      issueSize method

      Returns the total number of issue files.

        int size = sastResultReader.issueSize();
      • Parameters None

      • Return Value int

      readIssue method

      Reads the issue file and returns a SastIssue list.

        List<SastIssue> sastIssues = sastResultReader.readIssue(index: int);
      • Parameters
        • index Issue file index starting from 1. The maximum index can be checked through the issueSize() method.
      • Return Value List< SastIssue >

      readWorkMessage method

      Reads messages that occur during analysis and returns a WorkMessage list.

        List<WorkMessage> workMessages = sastResultReader.readWorkMessage();
      • Parameters None

      • Return Value List< WorkMessage >
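Putting the SastResultReader methods above together, a reading flow might look like this sketch (the analysisId and file path are placeholders):

```java
SastResultReader reader =
        (SastResultReader) client.getAnalysiResultReader(1L, "/home/result.zip");

SastSummary summary = reader.readSummary();   // analysis result summary
List<String> assets = reader.readAsset();     // analyzed asset list

// Issue files are indexed from 1 up to issueSize()
for (int i = 1; i <= reader.issueSize(); i++) {
    List<SastIssue> issues = reader.readIssue(i);
    // process issues here
}

List<WorkMessage> messages = reader.readWorkMessage();
```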

    • ScaResultReader object

      readSummary method

      Returns analysis result summary information.

        ScaSummary scaSummary = scaResultReader.readSummary();
      • Parameters None

      • Return Value ScaSummary

      readAsset method

      Returns analysis asset list.

        List<String> assets = scaResultReader.readAsset();
      • Parameters None

      • Return Value List< String >

      issueSize method

      Returns the total number of issue files.

        int size = scaResultReader.issueSize();
      • Parameters None

      • Return Value int

      readIssue method

      Reads the issue file and returns a ScaComponent list.

        List<ScaComponent> scaComponents = scaResultReader.readIssue(index: int);
      • Parameters
        • index Issue file index starting from 1. The maximum index can be checked through the issueSize() method.
      • Return Value List< ScaComponent >

      readWorkMessage method

      Reads messages that occur during analysis and returns a WorkMessage list.

        List<WorkMessage> workMessages = scaResultReader.readWorkMessage();
      • Parameters None

      • Return Value List< WorkMessage >

      getSbomPath method

      Receives sbomType and returns the SBOM path corresponding to that type.

        Path sbomPath = scaResultReader.getSbomPath(sbomType: SbomType);
      • Parameters

        • sbomType Required SbomType
          SBOM type of the SBOM file to return

      • Return Value

        • sbomPath Path SBOM file path

      getLicenseNoticeHtmlPath method

      Returns the license notice file (HTML) path.

        Path path = scaResultReader.getLicenseNoticeHtmlPath();
      • Parameters None

      • Return Value

        • path Path License notice file (HTML) path

      getLicenseNoticeMarkDownPath method

      Returns the license notice file (Markdown) path.

        Path path = scaResultReader.getLicenseNoticeMarkDownPath();
      • Parameters None

      • Return Value

        • path Path License notice file (Markdown) path

      getLicenseNoticeTextPath method

      Returns the license notice file (Text) path.

        Path path = scaResultReader.getLicenseNoticeTextPath();
      • Parameters None

      • Return Value

        • path Path License notice file (Text) path
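For example, the SBOM and license notice paths can be retrieved as below (the SbomType constant name is assumed to match the sbomTypes values listed under Open Source Analysis Options):

```java
// Locate the generated SPDX 2.3 JSON SBOM and the HTML license notice
Path sbomPath = scaResultReader.getSbomPath(SbomType.SPDX23_JSON);
Path noticePath = scaResultReader.getLicenseNoticeHtmlPath();
```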
    • DastResultReader object

      readSummary method

      Returns analysis result summary information.

        DastSummary dastSummary = dastResultReader.readSummary();
      • Parameters None

      • Return Value DastSummary

      readAsset method

      Returns analysis asset list.

        List<String> assets = dastResultReader.readAsset();
      • Parameters None

      • Return Value List< String >

      issueSize method

      Returns the total number of issue files.

        int size = dastResultReader.issueSize();
      • Parameters None

      • Return Value int

      readIssue method

      Reads the issue file and returns a DastIssue list.

        List<DastIssue> dastIssues = dastResultReader.readIssue(index: int);
      • Parameters
        • index Issue file index starting from 1. The maximum index can be checked through the issueSize() method.
      • Return Value List< DastIssue >

      readWorkMessage method

      Reads messages that occur during analysis and returns a WorkMessage list.

        List<WorkMessage> workMessages = dastResultReader.readWorkMessage();
      • Parameters None

      • Return Value List< WorkMessage >


6. Stop Analysis

You can stop an ongoing analysis.

client.analysisStop(analysisId: Long);

Parameters

  • analysisId Required Long
    Analysis ID.

Return Value

None


Object Information

RequestInfo

  • requestId Long Request ID
  • accountId Long User ID
  • operationType String
    Request type
  • requestVersion String
    Analysis result format version
  • stopAnalysisId Long
    Analysis ID for which stop was requested
  • status String
    Request status
  • result String
    Request result
  • insertTime Timestamp Time when the request was registered
  • updateTime Timestamp Time when the request was last modified
  • analysisList List< AnalysisInfo > Analysis list

AnalysisInfo

  • analysisId Long Analysis ID
  • requestId Long
    Analysis request ID
  • status String
    Analysis status
  • result String
    Analysis result
  • progress Integer
    Analysis progress rate
  • toolType String
    Analysis type
  • memo String
    Analysis memo
  • startTime Timestamp
    Analysis start time
  • endTime Timestamp
    Analysis end time
  • issueCount Long
    Total number of issues detected in the analysis
  • issueCountRisk1 Long
    Number of issues with 'Low' risk level
  • issueCountRisk2 Long
    Number of issues with 'Medium' risk level
  • issueCountRisk3 Long
    Number of issues with 'High' risk level
  • issueCountRisk4 Long
    Number of issues with 'Critical' risk level
  • issueCountRisk5 Long
    Number of issues with 'Very Critical' risk level

ANALYSIS_STATUS

Indicates analysis status and is divided into 7 types.

  • INIT
    Indicates that initialization is in progress to perform analysis.
  • READY
    Indicates that analysis preparation is in progress after initialization is complete.
  • PRE_PROCESS
    Indicates that preprocessing for analysis is in progress.
  • ANALYSIS
    Indicates that analysis is in progress.
  • POST_PROCESS
    Indicates that result processing is in progress after analysis completion.
  • COMPLETE
    Indicates that both analysis and result processing are complete.
  • STOP
    Indicates that analysis has been stopped.
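The statuses above can be used to poll an analysis until it finishes. A minimal sketch, assuming AnalysisInfo exposes getters named after its fields (getStatus, getResult):

```java
// Poll the analysis status every 5 seconds until it completes or stops
AnalysisInfo analysisInfo = client.getAnalysis(analysisId);
while (!"COMPLETE".equals(analysisInfo.getStatus())
        && !"STOP".equals(analysisInfo.getStatus())) {
    Thread.sleep(5000); // may throw InterruptedException
    analysisInfo = client.getAnalysis(analysisId);
}
System.out.println(analysisInfo.getResult()); // SUCCESS, FAIL, or STOPPED
```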

ANALYSIS_RESULT

Indicates the analysis result value.

  • SUCCESS
    Indicates that analysis was successful.
  • FAIL
    Indicates that analysis failed.
  • STOPPED Indicates that analysis was stopped.


OndemandException

OndemandException extends RuntimeException and is classified into two types.

  • OndemandClientException May occur when the client sends a request to Sparrow On-Demand or processes a response from Sparrow On-Demand.
    • resultCode String Contains code information about the cause of the exception.
    • message String Contains a message about the exception.
  • OndemandServerException Occurs when Sparrow On-Demand successfully receives a request but cannot process it.
    • resultCode String Contains code information about the cause of the exception.
    • message String Contains a message about the exception.
    • statusCode int Indicates the response status code.
    • validationErrors A detailed failure message returned from the server when request validation fails.
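A sketch of handling both exception types around an analysis request (the getter names getResultCode, getStatusCode, and getMessage are assumed from the fields above, and the two classes are assumed to be independent subtypes of OndemandException):

```java
try {
    RequestInfo requestInfo = client.doAnalysis(request);
} catch (OndemandServerException e) {
    // The server received the request but could not process it
    System.err.println(e.getStatusCode() + " " + e.getResultCode() + ": " + e.getMessage());
} catch (OndemandClientException e) {
    // The request could not be sent, or the response could not be processed
    System.err.println(e.getResultCode() + ": " + e.getMessage());
}
```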

Analysis Options

Source Code Analysis Options

  • maxSourceSize Maximum source size The maximum size of the analysis target checked in source code analysis or open source analysis. If the size of the analysis target downloaded during analysis is larger than this value, the analysis is terminated. You can enter an integer between 1 and 200 (unit: MB)

  • extensions List of file extensions to analyze Source code analysis uses file extensions to decide which files to include in the analysis target. Files whose extensions do not match are excluded from analysis. If you enter *, all files are analyzed.

    Input examples

    • ["java", "go"]
    • ["*"]

    Tip: For compressed files, if the extension of the compressed file is included in the analysis targets, all files inside the compressed file are included in the analysis.

  • excludedPath Paths to exclude from analysis If there are files to exclude from analysis, enter their paths. Issues will not be detected in files under the entered paths.

    Input examples

    • /User/jkw/ddde
    • /home/sparrow/*
    • */dev/*

    Tip: Matching is case-insensitive, and * can be used as a wildcard.

    • *AA*: Matches all strings containing AA
    • AA* : Matches all strings starting with AA
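Applied to a SAST request, the options above might be set as in this sketch (the builder setter names are assumed to match the option names):

```java
SastOptionRequest option = SastOptionRequest.builder()
        .analysisSource(...)                              // VCS or ObjectStorage source
        .maxSourceSize(200)                               // unit: MB
        .extensions(Arrays.asList("java", "go"))          // analyze only these extensions
        .excludedPath(Arrays.asList("/home/sparrow/*"))   // skip these paths
        .build();
```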

Open Source Analysis Options

  • maxSourceSize Maximum source size The maximum size of the analysis target checked in source code analysis or open source analysis. If the size of the analysis target downloaded during analysis is larger than this value, the analysis is terminated. You can enter an integer between 1 and 200 (unit: MB)

  • sbomTypes SBOM type list If the list is empty, SBOM is not generated. You can enter the following values:

    • SPDX 2.2: SPDX22 (.spdx), SPDX22_JSON (.json), SPDX22_SPREADSHEET (.xlsx), SPDX22_RDF (.rdf)
    • SPDX 2.3: SPDX23 (.spdx), SPDX23_JSON (.json), SPDX23_SPREADSHEET (.xlsx), SPDX23_RDF (.rdf)
    • SPDX 3.0: SPDX30_JSON (.json)
    • CycloneDX: CycloneDX14, CycloneDX15, CycloneDX16 (.json)
    • SWID: SWID (.zip)
    • NIS SBOM: NIS_CSV (.csv), NIS_PDF (.pdf), NIS_JSON (.json)
  • sbomCreatorUsername SBOM creator

  • sbomCreatorEmail SBOM creator email
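Similarly for an SCA request, a sketch (the setter names and the SbomType constant are assumptions):

```java
ScaOptionRequest option = ScaOptionRequest.builder()
        .analysisSource(...)                            // VCS or ObjectStorage source
        .sbomTypes(Arrays.asList(SbomType.SPDX23_JSON)) // generate an SPDX 2.3 JSON SBOM
        .sbomCreatorUsername("creator")
        .sbomCreatorEmail("creator@example.com")
        .build();
```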

Web Vulnerability Analysis Options

  • crawlerTargetSeedUrls Target URL for analysis URL to analyze. Only one can be entered.

    Before entering the target URL for analysis, verify that the URL is reachable from the external internet and that no firewall on the server blocks access.

  • commonRecordsLogin Login record file list Login record files are .ecl format files saved from the event clipboard in which user actions at specific URLs are recorded. They are mainly used to store ID and password information used by users to log in at specific URLs when collecting or analyzing URLs.

    When attaching a login record file, if the event clipboard recording reaches the URL where the recording started during URL collection and analysis, the user's actions stored in that file are reproduced as-is. This allows you to pass the authentication required on login pages.

  • crawlerTargetContainEntireSeed Collect only sub-paths Collecting only sub-paths means collecting only paths that contain the target URLs entered in the project, specified as true or false (default: true). If this option is set to true, only sub-paths containing the target URL are analyzed. If it is set to false, parent paths of the project's target URL are also analyzed.

  • crawlerRequestAcceptLanguage Client language Sets which language is configured in the browser where the web application to be analyzed is displayed and which language the HTTP client can understand. You can enter it in locale format displayed as language_region (default: ko_KR).

  • crawlerCrawlMaxUrl Maximum number of URLs to collect

    The maximum number of URLs to collect is the maximum number of URLs that can be collected in the analysis. If too many URLs are collected, the analysis results may not be accurate. Therefore, it is recommended to specify the maximum number of URLs to collect in this option (default: 0).

    The larger the value entered in this option, the more URLs can be collected, but the analysis time for collection may also increase. The smaller the value entered in this option, the fewer URLs can be collected and the analysis time decreases. If nothing is entered, the default value of the option is 0, in which case the number of URLs that can be collected is not limited.

  • crawlerCrawlTimeout Maximum collection time

    The maximum collection time is the maximum time that URLs can be collected in the analysis. If too much time is taken, the analysis results may not be accurate. Therefore, it is recommended to specify the collection time in this option (unit: minutes, default: 0).

    The larger the value entered in this option, the longer the collection takes and the more URLs may be collected. The smaller the value, the shorter the collection time and the fewer URLs may be collected. If nothing is entered, the default value of 0 is used, in which case the collection time is not limited.

  • analyzerAnalyzeTimeout Maximum analysis time

    The maximum analysis time is the maximum time that URLs can be analyzed in the analysis. If too much time is taken, the analysis results may not be accurate. Therefore, it is recommended to specify the analysis time in this option (unit: minutes, default: 0).

    The larger the value entered in this option, the longer the analysis takes and the more results may be produced. The smaller the value, the shorter the analysis time and the fewer results may be produced. If nothing is entered, the default value of 0 is used, in which case the analysis time is not limited.

  • crawlerSkipUrl URLs to exclude from collection

    URLs to exclude from collection are a list of strings; if a URL contains any of the entered strings, it is skipped and not collected. You can enter one or more strings separated by Enter or comma (,).

    If any word in the list entered in this option is included in a URL to be collected, that URL is not collected. However, because the URL is skipped just before collection, the browser may still visit it.

  • analyzerSkipUrl URLs to exclude from analysis

    URLs to exclude from analysis are a list of strings; if a URL contains any of the entered strings, it is skipped and not analyzed. You can enter one or more strings separated by Enter or comma (,).

    If any word in the list entered in this option is included in a URL to be analyzed, that URL is not analyzed.

    Tip: If you want to exclude page behavior from analysis rather than the entire page displayed as a URL, use the Elements to exclude from event execution option below.

  • crawlerSkipUrlSuffix URL suffixes to exclude

    URL suffixes to exclude are a list of suffixes; if the end of a URL matches any of the entered words or extensions, it is skipped and not collected. Enter them in extension format starting with a period (.), separated by Enter or comma (,) (default: .js .jsx .ts .tsx .css .xml .jpg .jpeg .gif .bmp .png .ico .wma .wav .mp3 .wmv .avi .mp4 .mov .exe .zip .tar .tar.gz .7z .doc .xls .ppt .docx .xlsx .pptx .pdf .txt .csv .jar .eot .woff2 .woff .ttf .otf .apk .hwp .svg .msi).

    If any entry in the list matches the end of a URL to be collected, that URL is not collected. Because the check is made against HTML element attribute values before navigating, the URL is never visited, so file downloads and similar actions can be skipped.

  • crawlerExcludeCssSelector Elements to exclude from event execution (CSS selector)

    If any CSS selector in the list entered in this option matches elements on the page, events of the corresponding HTML elements and all of their child HTML elements are not executed. You can enter one or more CSS selectors to exclude, separated by Enter or comma (,). This way, you can ensure that, for example, the logout button is not clicked on the page.

  • crawlerIncludeCssSelector Elements to include for event execution (CSS selector)

    If any CSS selector in the list entered in this option matches elements on the page, events of the corresponding HTML elements and all of their child HTML elements are executed. You can enter one or more CSS selectors to include, separated by Enter or comma (,). This way, you can execute elements, such as tags, that would not otherwise be included in page events.

  • crawlerExcludeXpath Elements to exclude from event execution (XPath)

    If any XPath in the list entered in this option matches elements on the page, events of the corresponding HTML elements and all of their child HTML elements are not executed. You can enter one or more XPaths to exclude, separated by Enter or comma (,). This way, you can ensure that, for example, the logout button is not clicked on the page.

  • crawlerIncludeXpath Elements to include for event execution (XPath)

    If any XPath in the list entered in this option matches elements on the page, events of the corresponding HTML elements and all of their child HTML elements are executed. You can enter one or more XPaths to include, separated by Enter or comma (,). This way, you can execute elements, such as tags, that would not otherwise be included in page events.

  • crawlerRequestCustomHeaders Custom HTTP headers

    Custom HTTP headers refer to a list of header names and values to be included in HTTP requests sent when collecting URLs. If you enter header names and values in this option, those headers are added to all HTTP request messages. You can enter one or more headers.

    Enter only headers that are strictly necessary for HTTP requests, because applying custom headers sets up a proxy in the browser and may slow down collection.

    Except for the Cookie header, if multiple headers with the same name are entered, only one of them is applied; if you need to enter multiple values, separate the header values with ;. If a header with the same name already exists, that header is removed and the custom header is added. To use custom headers, the host of the target URL must not be localhost or 127.0.0.1; to analyze a web application on the local machine, enter the local IP address instead.

  • crawlerLimitUrlDepthDegree URL collection depth URL collection depth means how far the URL to be collected is from the starting URL and is distinguished by high, medium, and low. The farther the URL, the more minimum actions such as page navigation are required to reach a specific URL from the starting URL (default: medium).

    If this option is set to high, URLs far from the starting URL are also collected, but collection takes a long time. If this option is set to low, the time to collect URLs in the project is shortened, but URLs far away are not collected.

  • crawlerLimitDomDepthDegree DOM collection depth DOM collection depth means how far the DOM to be collected is from the first DOM generated at the same URL and is distinguished by high, medium, and low. The farther the DOM, the more minimum actions are required to reach a specific DOM of the same URL from the first DOM (default: medium).

    If this option is set to high, DOMs far from the first DOM generated when moving to the URL are also collected, but collection takes a long time. If this option is set to low, the time to collect DOMs in the project is shortened, but DOMs far away are not collected.

  • crawlerBrowserExplicitTimeout Event wait time Event wait time refers to the time to wait for event execution results to be reflected in the DOM each time an event is performed. You can enter a number between 0 and 5000, and if the option is not entered, the default value is 300 (unit: milliseconds, default: 300).

    The larger the value entered in this option, the more you can collect URLs of web applications that take time to reflect executed events in the DOM, but the collection speed slows down. The smaller the value entered in this option, the faster the URL collection speed, but URLs of web applications that require time when the DOM changes are not collected.

  • crawlerRequestCountPerSecond Number of HTTP requests Number of HTTP requests refers to the number of HTTP requests that can be sent per second when collecting URLs. You can enter a number between -1 and 10000, and if the option is not entered, the default value is -1, in which case the number of HTTP requests that can be sent is not limited (unit: count, default: -1).

    The larger the value entered in this option, the more HTTP requests can be sent per second, making URL collection faster, but traffic increases and the load on the target web application server may also increase. The smaller the value entered in this option, the lower the traffic, reducing the load on the target web application server, but URL collection speed slows down.

  • crawlerClientTimeout HTTP client wait time HTTP client wait time refers to the maximum time to wait when a delay occurs in the process of the HTTP client connecting to the web server, sending HTTP requests, and receiving HTTP responses to perform analysis. You can enter a number between 0 and 30000, and if the option is not entered, the default value is 3000 (unit: milliseconds, default: 3000).

    The larger the value entered in this option, the more likely the analysis proceeds normally even if delays occur due to poor network connection with the web server; however, if disconnections occur continuously, the analysis time is likely to increase. The smaller the value, the faster the analysis, but URLs may fail to be analyzed if delays occur due to poor network connection with the web server.
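Combining several of the options above into a DAST option sketch (the builder setter names are assumed to match the option names):

```java
DastOptionRequest option = DastOptionRequest.builder()
        .crawlerTargetSeedUrls(Arrays.asList("http://example.com")) // single target URL
        .crawlerCrawlMaxUrl(500)                 // stop after collecting 500 URLs
        .crawlerCrawlTimeout(60)                 // unit: minutes
        .analyzerAnalyzeTimeout(120)             // unit: minutes
        .crawlerSkipUrl(Arrays.asList("logout")) // skip URLs containing "logout"
        .build();
```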