1. 7.6 Speculative loading
      1. 7.6.1 Speculation rules
        1. 7.6.1.1 Data model
        2. 7.6.1.2 Parsing
        3. 7.6.1.3 Processing model
      2. 7.6.2 Navigational prefetching
      3. 7.6.3 The `Speculation-Rules` header
      4. 7.6.4 The `Sec-Speculation-Tags` header
      5. 7.6.5 Security considerations
        1. 7.6.5.1 Cross-site requests
        2. 7.6.5.2 Injected content
        3. 7.6.5.3 IP anonymization
      6. 7.6.6 Privacy considerations
        1. 7.6.6.1 Heuristics and optionality
        2. 7.6.6.2 State partitioning
        3. 7.6.6.3 Identity joining
    2. 7.7 The `X-Frame-Options` header
    3. 7.8 Text directives and URL fragments
      1. 7.8.1 Introduction
      2. 7.8.2 Link lifetime
      3. 7.8.3 Exposure to script
      4. 7.8.4 Applying directives to a document
        1. 7.8.4.1 URLs in UA features
          1. 7.8.4.1.1 Location bar
          2. 7.8.4.1.2 Bookmarks
          3. 7.8.4.1.3 Sharing
      5. 7.8.5 Supporting concepts
      6. 7.8.6 Syntax
      7. 7.8.7 Parsing and processing model
        1. 7.8.7.1 Parsing
        2. 7.8.7.2 Finding and invoking text directives
        3. 7.8.7.3 Word boundaries
      8. 7.8.8 Generating text fragment directives
        1. 7.8.8.1 Prefer exact matching to range-based
        2. 7.8.8.2 Use context only when necessary
        3. 7.8.8.3 Determine if fragment ID is needed
      9. 7.8.9 Security and privacy considerations
        1. 7.8.9.1 Scroll on navigation
        2. 7.8.9.2 Search timing
        3. 7.8.9.3 Restricting the text fragment
        4. 7.8.9.4 Restricting scroll on load
    4. 7.9 The `Refresh` header
    5. 7.10 Browser user interface considerations

7.6 Speculative loading

Speculative loading is the practice of performing navigation actions, such as prefetching, ahead of navigation starting. This makes subsequent navigations faster.

Developers can initiate speculative loads by using speculation rules . User agents might also perform speculative loads in certain implementation-defined scenarios, such as when the user types into the address bar.

7.6.1 Speculation rules

Speculation rules are how developers instruct the browser about speculative loading operations that the developer believes will be beneficial. They are delivered as JSON documents, via either the `<script type="speculationrules">` element inline in the markup, or an external resource referenced by the `Speculation-Rules` HTTP response header.

The following JSON document is parsed into a speculation rule set specifying a number of desired conditions for the user agent to start a referrer-initiated navigational prefetch :

{
  "prefetch": [
    {
      "urls": ["/chapters/5"]
    },
    {
      "eagerness": "moderate",
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": { "selector_matches": ".no-prefetch" } }
        ]
      }
    }
  ]
}
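
For example, a rule set like the one above can be delivered inline, as the child text content of a script element whose type attribute is "speculationrules":

<script type="speculationrules">
{
  "prefetch": [
    { "urls": ["/chapters/5"] }
  ]
}
</script>

Alternatively, it can be delivered as an external JSON resource referenced by the `Speculation-Rules` HTTP response header, as described below.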

A JSON document representing a speculation rule set must meet the following speculation rule set authoring requirements :

A valid speculation rule is a JSON object that meets the following requirements:

A valid document rule predicate is a JSON object that meets the following requirements:

A valid URL pattern input is either:

7.6.1.1 Data model

A speculation rule set is a struct with the following items :

In the future, other rules will be possible, e.g., prerender rules. See Prerendering Revamped for such not-yet-accepted extensions. [PRERENDERING-REVAMPED]

A speculation rule is a struct with the following items :


A document rule predicate is one of the following:

A document rule conjunction is a struct with the following items :

A document rule disjunction is a struct with the following items :

A document rule negation is a struct with the following items :

A document rule URL pattern predicate is a struct with the following items :

A document rule selector predicate is a struct with the following items :


A speculation rule eagerness is one of the following strings :

" immediate "

The developer believes that performing the associated speculative loads is very likely to be worthwhile, and they might also expect that load to require significant lead time to complete. User agents should usually enact the speculative load candidate as soon as practical, subject only to considerations such as user preferences, device conditions, and resource limits.

" eager "

User agents should enact the speculative load candidate on even a slight suggestion that the user may navigate to this URL in the future. For instance, the user might have moved the cursor toward a link or hovered it, even momentarily, or paused scrolling when the link is one of the more prominent ones in the viewport. The author is seeking to capture as many navigations as possible, as early as possible.

" moderate "

User agents should enact the candidate if user behavior suggests the user may navigate to this URL in the near future. For instance, the user might have scrolled a link into the viewport and shown signs of being likely to click it, e.g., by moving the cursor over it for some time. The developer is seeking a balance between " eager " and " conservative ".

" conservative "

User agents should enact the candidate only when the user is very likely to navigate to this URL at any moment. For instance, the user might have begun to interact with a link. The developer is seeking to capture some of the benefits of speculative loading with a fairly small tradeoff of resources.

A speculation rule eagerness A is less eager than another speculation rule eagerness B if A follows B in the above list.

A speculation rule eagerness A is at least as eager as another speculation rule eagerness B if A is not less eager than B .


A speculation rule tag is either an ASCII string whose code points are all in the range U+0020 to U+007E inclusive, or null.

This code point range restriction ensures the value can be sent in an HTTP header with no escaping or modification.
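
A minimal sketch of this restriction, as a hypothetical validation function (not part of this standard's algorithms):

// Returns true if value is a valid non-null speculation rule tag,
// i.e., every code point is in the range U+0020 to U+007E inclusive.
function isValidSpeculationRuleTag(value) {
  return typeof value === "string" &&
    [...value].every(c => {
      const cp = c.codePointAt(0);
      return cp >= 0x20 && cp <= 0x7e;
    });
}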


A speculation rule requirement is the string " anonymous-client-ip-when-cross-origin ".

In the future, more possible requirements might be defined.

7.6.1.2 Parsing

Since speculative loading is a progressive enhancement, this standard is fairly conservative in its parsing behavior. In particular, unknown keys or invalid values usually cause parsing failure, since it is safer to do nothing than to possibly misinterpret a speculation rule.

That said, parsing failure for a single speculation rule still allows other speculation rules to be processed. It is only in the case of top-level misconfiguration that the entire speculation rule set is discarded.

To parse a speculation rule set string given a string input , a Document document , and a URL baseURL :

  1. Let parsed be the result of parsing a JSON string to an Infra value given input .

  2. If parsed is not a map , then throw a TypeError indicating that the top-level value needs to be a JSON object.

  3. Let result be a new speculation rule set .

  4. Let tag be null.

  5. If parsed [" tag "] exists :

    1. If parsed [" tag "] is not a speculation rule tag , then throw a TypeError indicating that the speculation rule tag is invalid.

    2. Set tag to parsed [" tag "].

  6. Let typesToTreatAsPrefetch be « " prefetch " ».

  7. The user agent may append " prerender " to typesToTreatAsPrefetch .

    Since this specification only includes prefetching, this allows user agents to treat requests for prerendering as requests for prefetching. User agents which implement prerendering, per the Prerendering Revamped specification, will instead interpret these as prerender requests. [PRERENDERING-REVAMPED]

  8. For each type of typesToTreatAsPrefetch :

    1. If parsed [ type ] exists :

      1. If parsed [ type ] is a list , then for each rule of parsed [ type ]:

        1. Let rule be the result of parsing a speculation rule given rule , tag , document , and baseURL .

        2. If rule is null, then continue .

        3. Append rule to result 's prefetch rules .

      2. Otherwise, the user agent may report a warning to the console indicating that the rules list for type needs to be a JSON array.

  9. Return result .
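
For example, the following rule set (tag value illustrative) supplies a ruleset-level tag, which steps 4 and 5 above propagate into each of the two resulting prefetch rules:

{
  "tag": "chapter-rules",
  "prefetch": [
    { "urls": ["/chapters/5"] },
    { "urls": ["/chapters/6"] }
  ]
}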

To parse a speculation rule given a map input , a speculation rule tag rulesetLevelTag , a Document document , and a URL baseURL :

  1. If input is not a map :

    1. The user agent may report a warning to the console indicating that the rule needs to be a JSON object.

    2. Return null.

  2. If input has any key other than " source ", " urls ", " where ", " relative_to ", " eagerness ", " referrer_policy ", " tag ", " requires ", " expects_no_vary_search ", or " target_hint ":

    1. The user agent may report a warning to the console indicating that the rule has unrecognized keys.

    2. Return null.

    " target_hint " has no impact on the processing model in this standard. However, implementations of Prerendering Revamped can use it for prerendering rules, and so requiring user agents to fail parsing such rules would be counterproductive. [PRERENDERING-REVAMPED] .

  3. Let source be null.

  4. If input [" source "] exists , then set source to input [" source "].

  5. Otherwise, if input [" urls "] exists and input [" where "] does not exist , then set source to " list ".

  6. Otherwise, if input [" where "] exists and input [" urls "] does not exist , then set source to " document ".

  7. If source is neither " list " nor " document ":

    1. The user agent may report a warning to the console indicating that a source could not be inferred or an invalid source was specified.

    2. Return null.

  8. Let urls be an empty list .

  9. Let predicate be null.

  10. If source is " list ":

    1. If input [" where "] exists :

      1. The user agent may report a warning to the console indicating that there were conflicting sources for this rule.

      2. Return null.

    2. If input [" relative_to "] exists :

      1. If input [" relative_to "] is neither " ruleset " nor " document ":

        1. The user agent may report a warning to the console indicating that the supplied relative-to value was invalid.

        2. Return null.

      2. If input [" relative_to "] is " document ", then set baseURL to document 's document base URL .

    3. If input [" urls "] does not exist or is not a list :

      1. The user agent may report a warning to the console indicating that the supplied URL list was invalid.

      2. Return null.

    4. For each urlString of input [" urls "]:

      1. If urlString is not a string:

        1. The user agent may report a warning to the console indicating that the supplied URL must be a string.

        2. Return null.

      2. Let parsedURL be the result of URL parsing urlString with baseURL .

      3. If parsedURL is failure, or parsedURL 's scheme is not an HTTP(S) scheme :

        1. The user agent may report a warning to the console indicating that the supplied URL string was unparseable.

        2. Continue .

      4. Append parsedURL to urls .

  11. If source is " document ":

    1. If input [" urls "] or input [" relative_to "] exists :

      1. The user agent may report a warning to the console indicating that there were conflicting sources for this rule.

      2. Return null.

    2. If input [" where "] does not exist , then set predicate to a document rule conjunction whose clauses is an empty list .

      Such a predicate will match all links.

    3. Otherwise, set predicate to the result of parsing a document rule predicate given input [" where "], document , and baseURL .

    4. If predicate is null, then return null.

  12. Let eagerness be " immediate " if source is " list "; otherwise, " conservative ".

  13. If input [" eagerness "] exists :

    1. If input [" eagerness "] is not a speculation rule eagerness :

      1. The user agent may report a warning to the console indicating that the eagerness was invalid.

      2. Return null.

    2. Set eagerness to input [" eagerness "].

  14. Let referrerPolicy be the empty string.

  15. If input [" referrer_policy "] exists :

    1. If input [" referrer_policy "] is not a referrer policy :

      1. The user agent may report a warning to the console indicating that the referrer policy was invalid.

      2. Return null.

    2. Set referrerPolicy to input [" referrer_policy "].

  16. Let tags be an empty ordered set .

  17. If rulesetLevelTag is not null, then append rulesetLevelTag to tags .

  18. If input [" tag "] exists :

    1. If input [" tag "] is not a speculation rule tag :

      1. The user agent may report a warning to the console indicating that the tag was invalid.

      2. Return null.

    2. Append input [" tag "] to tags .

  19. If tags is empty , then append null to tags .

  20. Assert : tags 's size is either 1 or 2.

  21. Let requirements be an empty ordered set .

  22. If input [" requires "] exists :

    1. If input [" requires "] is not a list :

      1. The user agent may report a warning to the console indicating that the requirements were not understood.

      2. Return null.

    2. For each requirement of input [" requires "]:

      1. If requirement is not a speculation rule requirement :

        1. The user agent may report a warning to the console indicating that the requirement was not understood.

        2. Return null.

      2. Append requirement to requirements .

  23. Let noVarySearchHint be the default URL search variance .

  24. If input [" expects_no_vary_search "] exists :

    1. If input [" expects_no_vary_search "] is not a string :

      1. The user agent may report a warning to the console indicating that the ` No-Vary-Search ` hint was invalid.

      2. Return null.

    2. Set noVarySearchHint to the result of parsing a URL search variance given input [" expects_no_vary_search "].

  25. Return a speculation rule with:

    URLs: urls
    predicate: predicate
    eagerness: eagerness
    referrer policy: referrerPolicy
    tags: tags
    requirements: requirements
    No-Vary-Search hint: noVarySearchHint
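
As an illustration of the fields gathered by this algorithm, the following list rule (all values illustrative) parses successfully with every optional field set:

{
  "prefetch": [
    {
      "source": "list",
      "urls": ["chapters/5"],
      "relative_to": "document",
      "eagerness": "moderate",
      "referrer_policy": "no-referrer",
      "tag": "chapters",
      "requires": ["anonymous-client-ip-when-cross-origin"],
      "expects_no_vary_search": "params=(\"utm_source\")"
    }
  ]
}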

To parse a document rule predicate given a value input , a Document document , and a URL baseURL :

  1. If input is not a map :

    1. The user agent may report a warning to the console indicating that the document rule predicate was invalid.

    2. Return null.

  2. If input does not contain exactly one of " and ", " or ", " not ", " href_matches ", or " selector_matches ":

    1. The user agent may report a warning to the console indicating that the document rule predicate was empty or ambiguous.

    2. Return null.

  3. Let predicateType be the single key found in the previous step.

  4. If predicateType is " and " or " or ":

    1. If input has any key other than predicateType :

      1. The user agent may report a warning to the console indicating that the document rule predicate had unexpected extra options.

      2. Return null.

    2. If input [ predicateType ] is not a list :

      1. The user agent may report a warning to the console indicating that the document rule predicate had an invalid clause list.

      2. Return null.

    3. Let clauses be an empty list .

    4. For each rawClause of input [ predicateType ]:

      1. Let clause be the result of parsing a document rule predicate given rawClause , document , and baseURL .

      2. If clause is null, then return null.

      3. Append clause to clauses .

    5. If predicateType is " and ", then return a document rule conjunction whose clauses is clauses .

    6. Return a document rule disjunction whose clauses is clauses .

  5. If predicateType is " not ":

    1. If input has any key other than " not ":

      1. The user agent may report a warning to the console indicating that the document rule predicate had unexpected extra options.

      2. Return null.

    2. Let clause be the result of parsing a document rule predicate given input [ predicateType ], document , and baseURL .

    3. If clause is null, then return null.

    4. Return a document rule negation whose clause is clause .

  6. If predicateType is " href_matches ":

    1. If input has any key other than " href_matches " or " relative_to ":

      1. The user agent may report a warning to the console indicating that the document rule predicate had unexpected extra options.

      2. Return null.

    2. If input [" relative_to "] exists :

      1. If input [" relative_to "] is neither " ruleset " nor " document ":

        1. The user agent may report a warning to the console indicating that the supplied relative-to value was invalid.

        2. Return null.

      2. If input [" relative_to "] is " document ", then set baseURL to document 's document base URL .

    3. Let rawPatterns be input [" href_matches "].

    4. If rawPatterns is not a list , then set rawPatterns to « rawPatterns ».

    5. Let patterns be an empty list .

    6. For each rawPattern of rawPatterns :

      1. Let pattern be the result of building a URL pattern from an Infra value given rawPattern and baseURL . If this step throws an exception, catch the exception and set pattern to null.

      2. If pattern is null:

        1. The user agent may report a warning to the console indicating that the supplied URL pattern was invalid.

        2. Return null.

      3. Append pattern to patterns .

    7. Return a document rule URL pattern predicate whose patterns is patterns .

  7. If predicateType is " selector_matches ":

    1. If input has any key other than " selector_matches ":

      1. The user agent may report a warning to the console indicating that the document rule predicate had unexpected extra options.

      2. Return null.

    2. Let rawSelectors be input [" selector_matches "].

    3. If rawSelectors is not a list , then set rawSelectors to « rawSelectors ».

    4. Let selectors be an empty list .

    5. For each rawSelector of rawSelectors :

      1. Let parsedSelectorList be failure.

      2. If rawSelector is a string, then set parsedSelectorList to the result of parsing a selector given rawSelector .

      3. If parsedSelectorList is failure:

        1. The user agent may report a warning to the console indicating that the supplied selector list was invalid.

        2. Return null.

      4. For each selector of parsedSelectorList , append selector to selectors .

    6. Return a document rule selector predicate whose selectors is selectors .

  8. Assert : this step is never reached, as one of the previous branches was taken.
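
For example, the following document rule (patterns and selectors illustrative) combines several predicate types, matching links into the articles or blog sections except those marked with a "no-prefetch" class:

{
  "prefetch": [
    {
      "where": {
        "and": [
          { "or": [
            { "href_matches": "/articles/*" },
            { "href_matches": "/blog/*", "relative_to": "document" }
          ]},
          { "not": { "selector_matches": ".no-prefetch" } }
        ]
      }
    }
  ]
}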

7.6.1.3 Processing model

A speculative load candidate is a struct with the following items :

A prefetch candidate is a speculative load candidate with the following additional item :

A prefetch IP anonymization policy is either null or a cross-origin prefetch IP anonymization policy .

A cross-origin prefetch IP anonymization policy is a struct whose single item is its origin , an origin .


A speculative load candidate candidateA is redundant with another speculative load candidate candidateB if the following steps return true:

  1. If candidateA 's No-Vary-Search hint is not equal to candidateB 's No-Vary-Search hint , then return false.

  2. If candidateA 's URL is not equivalent modulo search variance to candidateB 's URL given candidateA 's No-Vary-Search hint , then return false.

  3. Return true.

The requirement that the No-Vary-Search hints be equivalent is somewhat strict. It means that some cases which could theoretically be treated as matching are not treated as such, so redundant speculative loads could happen.

However, allowing more lenient matching makes the check no longer an equivalence relation, and producing such matches would require an implementation strategy that does a full comparison, instead of a simpler one using normalized URL keys. This is in line with the best practices for server operators, and attendant HTTP cache implementation notes, in No Vary Search § 6 Comparing .

In practice, we do not expect this to cause redundant speculative loads, since server operators and the corresponding speculation rules-writing web developers will follow best practices and use static ` No-Vary-Search ` header values/speculation rule hints.

Consider three speculative load candidates :

  1. A has a URL of https://example.com?a=1&b=1 and a No-Vary-Search hint parsed from params=("a") .

  2. B has a URL of https://example.com?a=2&b=1 and a No-Vary-Search hint parsed from params=("b") .

  3. C has a URL of https://example.com?a=2&b=2 and a No-Vary-Search hint parsed from params=("a") .

With the current definition of redundant with , none of these candidates are redundant with each other. A speculation rule set which contained all three could cause three separate speculative loads.

A definition which did not require equivalent No-Vary-Search hints could consider A and B to match (using A 's No-Vary-Search hint ), and B and C to match (using B 's No-Vary-Search hint ). But it could not consider A and C to match, so it would not be transitive, and thus not an equivalence relation.


Every Document has speculation rule sets , a list of speculation rule sets , initially empty.

Every Document has a consider speculative loads microtask queued , a boolean, initially false.

To consider speculative loads for a Document document :

  1. If document 's node navigable is not a top-level traversable , then return.

    Supporting speculative loads into child navigables has some complexities and is not currently defined. It might be possible to define it in the future.

  2. If document 's consider speculative loads microtask queued is true, then return.

  3. Set document 's consider speculative loads microtask queued to true.

  4. Queue a microtask given document to run the following steps:

    1. Set document 's consider speculative loads microtask queued to false.

    2. Run the inner consider speculative loads steps for document .

In addition to the call sites explicitly given in this standard:

In this standard, every call to consider speculative loads is given just a Document , and the algorithm re-computes all possible candidates in a stateless way. A real implementation would likely cache previous computations, and pass along information from the call site to make updates more efficient. For example, if an a element's href attribute is changed, that specific element could be passed along in order to update only the related speculative load candidate .

Note that because of how consider speculative loads queues a microtask, by the time the inner consider speculative loads steps are run, multiple updates (or cancelations ) might be processed together.

The inner consider speculative loads steps for a Document document are:

  1. If document is not fully active , then return.

  2. Let prefetchCandidates be an empty list .

  3. For each ruleSet of document 's speculation rule sets :

    1. For each rule of ruleSet 's prefetch rules :

      1. Let anonymizationPolicy be null.

      2. If rule 's requirements contains " anonymous-client-ip-when-cross-origin ", then set anonymizationPolicy to a cross-origin prefetch IP anonymization policy whose origin is document 's origin .

      3. For each url of rule 's URLs :

        1. Let referrerPolicy be the result of computing a speculative load referrer policy given rule and null.

        2. Append a new prefetch candidate with

          URL: url
          No-Vary-Search hint: rule 's No-Vary-Search hint
          eagerness: rule 's eagerness
          referrer policy: referrerPolicy
          tags: rule 's tags
          anonymization policy: anonymizationPolicy

          to prefetchCandidates .

      4. If rule 's predicate is not null:

        1. Let links be the result of finding matching links given document and rule 's predicate .

        2. For each link of links :

          1. Let referrerPolicy be the result of computing a speculative load referrer policy given rule and link .

          2. Append a new prefetch candidate with

            URL: link 's url
            No-Vary-Search hint: rule 's No-Vary-Search hint
            eagerness: rule 's eagerness
            referrer policy: referrerPolicy
            tags: rule 's tags
            anonymization policy: anonymizationPolicy

            to prefetchCandidates .

  4. For each prefetchRecord of document 's prefetch records :

    1. If prefetchRecord 's source is not " speculation rules ", then continue .

    2. Assert : prefetchRecord 's state is not " canceled ".

    3. If prefetchRecord is not still being speculated given prefetchCandidates , then cancel and discard prefetchRecord given document .

  5. Let prefetchCandidateGroups be an empty list .

  6. For each candidate of prefetchCandidates :

    1. Let group be « candidate ».

    2. Extend group with all items in prefetchCandidates , apart from candidate itself, which are redundant with candidate and whose eagerness is at least as eager as candidate 's eagerness .

    3. If prefetchCandidateGroups contains another group whose items are the same as group , ignoring order, then continue .

    4. Append group to prefetchCandidateGroups .

    The following speculation rules generate two redundant prefetch candidates :

    {
      "prefetch": [
        {
          "tag": "a",
          "urls": ["next.html"]
        },
        {
          "tag": "b",
          "urls": ["next.html"],
          "referrer_policy": "no-referrer"
        }
      ]
    }
    
    

    This step will create a single group containing them both, in the given order. (The second pass through will not create a group, since its contents would be the same as the first group, just in a different order.) This means that if the user agent chooses to execute the "may" step below to enact the group, it will enact the first candidate, and ignore the second. Thus, the request will be made with the default referrer policy , instead of using " no-referrer ".

    However, the collect tags from speculative load candidates algorithm will collect tags from both candidates in the group, so the ` Sec-Speculation-Tags ` header value will be ` "a", "b" `. This indicates to server operators that either rule could have caused the speculative load.

  7. For each group of prefetchCandidateGroups :

    1. The user agent may run the following steps:

      1. Let prefetchCandidate be group [0].

      2. Let tagsToSend be the result of collecting tags from speculative load candidates given group .

      3. Let prefetchRecord be a new prefetch record with

        source: " speculation rules "
        URL: prefetchCandidate 's URL
        No-Vary-Search hint: prefetchCandidate 's No-Vary-Search hint
        referrer policy: prefetchCandidate 's referrer policy
        anonymization policy: prefetchCandidate 's anonymization policy
        tags: tagsToSend

      4. Start a referrer-initiated navigational prefetch given prefetchRecord and document .

      When deciding whether to execute this "may" step, user agents should consider prefetchCandidate 's eagerness , in accordance with the current behavior of the user and the definitions of speculation rule eagerness .

      prefetchCandidate 's No-Vary-Search hint can also be useful in implementing the heuristics defined for the speculation rule eagerness values. For example, a user hovering over a link whose URL is equivalent modulo search variance to prefetchCandidate 's URL given prefetchCandidate 's No-Vary-Search hint could indicate to the user agent that performing this step would be useful.

      When deciding whether to execute this "may" step, user agents should prioritize user preferences (express or implied, such as data-saver or battery-saver modes) over the eagerness supplied by the web developer.

To compute a speculative load referrer policy given a speculation rule rule and an a element, area element, or null link :

  1. If rule 's referrer policy is not the empty string, then return rule 's referrer policy .

  2. If link is null, then return the empty string.

  3. Return link 's hyperlink referrer policy .
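
For instance, given the following (illustrative) markup, a prefetch of /next would use " no-referrer ": the rule supplies no " referrer_policy ", so the link's hyperlink referrer policy is returned by the last step above.

<script type="speculationrules">
{
  "prefetch": [
    { "where": { "selector_matches": ".prefetch-me" } }
  ]
}
</script>
<a class="prefetch-me" href="/next" referrerpolicy="no-referrer">Next</a>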

To collect tags from speculative load candidates given a list of speculative load candidates candidates :

  1. Let tags be an empty ordered set .

  2. For each candidate of candidates :

    1. For each tag of candidate 's tags : append tag to tags .

  3. Sort in ascending order tags , with tagA being less than tagB if tagA is null, or if tagA is code unit less than tagB .

  4. Return tags .
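
For example, if a group of candidates carries the tags "b", null, and "a", the resulting sorted set is « null, "a", "b" », which would be serialized into a ` Sec-Speculation-Tags ` header value of:

Sec-Speculation-Tags: null, "a", "b"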


To find matching links given a Document document and a document rule predicate predicate :

  1. Let links be an empty list .

  2. For each shadow-including descendant descendant of document , in shadow-including tree order :

    1. If descendant is not an a or area element with an href attribute, then continue .

    2. If descendant is not being rendered or is part of skipped contents , then continue .

      Such links, though present in document , aren't available for the user to interact with, and thus are unlikely to be good candidates. In addition, they might not have their style or layout computed, which might make selector matching less efficient in user agents which skip some or all of that work for these elements.

    3. If descendant 's url is null, or its scheme is not an HTTP(S) scheme , then continue .

    4. If predicate matches descendant , then append descendant to links .

  3. Return links .

A document rule predicate predicate matches an a or area element el if the following steps return true, switching on predicate 's type:

document rule conjunction
  1. For each clause of predicate 's clauses :

    1. If clause does not match el , then return false.

  2. Return true.

document rule disjunction
  1. For each clause of predicate 's clauses :

    1. If clause matches el , then return true.

  2. Return false.

document rule negation
  1. If predicate 's clause matches el , then return false.

  2. Return true.

document rule URL pattern predicate
  1. For each pattern of predicate 's patterns :

    1. If performing a match given pattern and el 's url gives a non-null value, then return true.

  2. Return false.

document rule selector predicate
  1. For each selector of predicate 's selectors :

    1. If performing a match given selector and el with the scoping root set to el 's root returns success, then return true.

  2. Return false.


Speculation rules features use the speculation rules task source , which is a task source .

Because speculative loading is generally less important than processing tasks for the purpose of the current document, implementations might give tasks enqueued here an especially low priority.

7.6.2 Navigational prefetching

For now, the navigational prefetching process is defined in the Prefetch specification. Moving it into this standard is tracked in issue #11123 . [PREFETCH]

This standard refers to the following concepts defined there:

7.6.3 The `Speculation-Rules` header

The `Speculation-Rules` HTTP response header allows the developer to request that the user agent fetch and apply a given speculation rule set to the current Document . It is a structured header whose value must be a list of strings that are all valid URL strings .
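
For example (rule set URL illustrative; note that, as a structured header string, it must be quoted):

Speculation-Rules: "/rules/prefetch.json"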

To process the ` Speculation-Rules ` header given a Document document and a response response :

  1. Let parsedList be the result of getting a structured field value given ` Speculation-Rules ` and " list " from response 's header list .

  2. If parsedList is null, then return.

  3. For each item of parsedList :

    1. If item is not a string , then continue .

    2. Let url be the result of URL parsing item with document 's document base URL .

    3. If url is failure, then continue .

    4. In parallel :

      1. Optionally, wait for an implementation-defined amount of time.

        This allows the implementation to prioritize other work ahead of loading speculation rules, as especially during Document creation and header processing, there are often many more important things going on.

      2. Queue a global task on the speculation rules task source given document 's relevant global object to perform the following steps:

        1. Let request be a new request whose URL is url , destination is " speculationrules ", and mode is " cors ".

        2. Fetch request with the following processResponseConsumeBody steps given response response and null, failure, or a byte sequence bodyBytes :

          1. If bodyBytes is null or failure, then abort these steps.

          2. If response 's status is not an ok status , then abort these steps.

          3. If the result of extracting a MIME type from response 's header list does not have an essence of " application/speculationrules+json ", then abort these steps.

          4. Let bodyText be the result of UTF-8 decoding bodyBytes .

          5. Let ruleSet be the result of parsing a speculation rule set string given bodyText , document , and response 's URL . If this throws an exception, then abort these steps.

          6. Append ruleSet to document 's speculation rule sets .

          7. Consider speculative loads for document .

7.6.4 The `Sec-Speculation-Tags` header

The ` Sec-Speculation-Tags ` HTTP request header specifies the web developer-provided tags associated with the speculative navigation request. It can also be used to distinguish speculative navigation requests from speculative subresource requests, since ` Sec-Purpose ` can be sent by both categories of requests.

The header is a structured header whose value must be a list . The list can contain either token or string values. String values represent developer-provided tags, whereas token values represent predefined tags. As of now, the only predefined tag is null , which indicates a speculative navigation request with no developer-defined tag.
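
For example, a speculative navigation request caused by a rule tagged "side-nav" (an illustrative developer-provided tag), and one caused by an untagged rule, would carry, respectively:

Sec-Speculation-Tags: "side-nav"
Sec-Speculation-Tags: null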

7.6.5 Security considerations

7.6.5.1 Cross-site requests

Speculative loads can be initiated by web pages to cross-site destinations. However, because such cross-site speculative loads are always done without credentials , as explained below , ambient authority is limited to requests that are already possible via other mechanisms on the platform.

The ` Speculation-Rules ` header can also be used to issue requests for JSON documents whose bodies will be parsed as a speculation rule set string . However, such requests use the " same-origin " credentials mode and the " cors " mode , and responses which do not have the application/speculationrules+json MIME type essence are ignored, so they are not useful for mounting attacks.

7.6.5.2 Injected content

Because links in a document can be selected for speculative loading via document rule predicates , developers need to be cautious if such links might contain user-generated markup. For example, if the href of a link can be entered by one user and displayed to all other users, a malicious user might choose a value like " /logout ", causing other users' browsers to automatically log out of the site when that link is speculatively loaded. Using a document rule selector predicate to exclude such potentially-dangerous links, or using a document rule URL pattern predicate to allowlist known-safe links, are useful techniques in this regard.
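
A sketch of such a defensive rule (class name and paths illustrative), which excludes links inside user-generated content as well as known-dangerous URLs:

{
  "prefetch": [
    {
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": { "selector_matches": ".user-generated a" } },
          { "not": { "href_matches": "/logout" } }
        ]
      }
    }
  ]
}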

As with all uses of the script element, developers need to be cautious about inserting user-provided content into <script type=speculationrules> 's child text content . In particular, the insertion of an unescaped closing </script> tag could be used to break out of the script element context and inject attacker-controlled markup.

The <script type=speculationrules> feature causes activity in response to content found in the document, so it is worth considering the options open to an attacker able to inject unescaped HTML. Such an attacker is already able to inject JavaScript or iframe elements. Speculative loads are generally less dangerous than arbitrary script execution. However, document rule predicates could be used to speculatively load links in the document, and the existence of those loads could provide a vector for exfiltrating information about those links. Defense-in-depth against this possibility is provided by Content Security Policy. In particular, the script-src directive can be used to restrict the parsing of speculation rules script elements, and the default-src directive applies to navigational prefetch requests arising from such speculation rules. Additional defense is provided by the requirement that speculative loads are only performed to potentially-trustworthy URLs , so an on-path attacker would only have access to metadata and traffic analysis, and could not see the URLs directly. [CSP]

It's generally not expected that user-generated content will be added as arbitrary response headers: server operators are already going to encounter significant trouble if this is possible. It is therefore unlikely that the ` Speculation-Rules ` header meaningfully expands the XSS attack surface. For this reason, Content Security Policy does not apply to the loading of rule sets via that header.

7.6.5.3 IP anonymization

This standard allows developers to request that navigational prefetches are performed using IP anonymization technology provided by the user agent. The details of this anonymization are not specified, but some general security principles apply.

To the extent IP anonymization is implemented using a proxy service, it is advisable to minimize the information available to the service operator and other entities on the network path. This likely involves, at a minimum, the use of TLS for the connection.

Site operators need to be aware that, similar to virtual private network (VPN) technology, the client IP address seen by the HTTP server might not exactly correspond to the user's actual network provider or location, and traffic for multiple distinct subscribers could originate from a single client IP address. This can affect site operators' security and abuse prevention measures. IP anonymization measures might make an effort to use an egress IP address which has a similar geolocation or is located in the same jurisdiction as the user, but any such behavior is particular to the user agent and not guaranteed.

7.6.6 Privacy considerations

7.6.6.1 Heuristics and optionality

The consider speculative loads algorithm contains a crucial "may" step, which encourages user agents to start referrer-initiated navigational prefetches based on a combination of the speculation rule eagerness and other features of the user's environment. Because it can be observable to the document whether speculative loads are performed, user agents must take care to protect privacy when making such decisions—for instance by only using information which is already available to the origin. If these heuristics depend on any persistent state, that state must be erased whenever the user erases other site data. If the user agent automatically clears other site data from time to time, it must erase such persistent state at the same time.

The use of origin instead of site here is intentional. Although same-site origins are generally allowed to coordinate if they wish, the web's security model is premised on preventing origins from accessing the data of other origins, even same-site ones. Thus, the user agent needs to be sure not to leak such data unintentionally across origins, not just across sites.

Examples of inputs which would be already known to the document:

Examples of persistent data related to the origin (which the origin could have gathered itself) but which must be erased according to user intent:

Examples of device information which might be valuable in deciding whether speculative loading is appropriate, but which needs to be considered as part of the user agent's overall privacy posture because it can make the user more identifiable across origins:

7.6.6.2 State partitioning

The start a referrer-initiated navigational prefetch algorithm is designed to ensure that the HTTP requests that it issues behave consistently with how user agents partition credentials according to storage keys . This property is maintained even for cross-partition prefetches, as follows.

If a future navigation using a prefetched response would load a document in the same partition, then at prefetch time, the partitioned credentials can be sent, as they can with subresource requests and scripted fetches. If such a future navigation would instead load a document in another partition, it would be inconsistent with the partitioning scheme to use partitioned credentials for the destination partition (since this would cross the boundary between partitions without a top-level navigation) and also inconsistent to use partitioned credentials within the originating partition (since this would result in the user seeing a document with different state than a non-prefetched navigation). Instead, a third, initially empty, partition is used for such requests. These requests therefore send along no credentials from either partition. However, the resulting prefetched response body constructed using this initially-empty partition can only be used if, at activation time, the destination partition contains no credentials.

This is somewhat similar to the behavior of only sending such prefetch requests if the destination partition is known ahead of time to not contain credentials. However, to avoid such behavior being used as a way of probing for the presence of credentials, such prefetch requests are instead always completed, and in the case of conflicting credentials, their results are not used.

Redirects are possible between these two types of requests. A redirect from a same- to cross-partition URL could contain information derived from partitioned credentials in the originating partition; however, this is equivalent to the originating document fetching the same-partition URL itself and then issuing a request for the cross-partition URL. A redirect from a cross- to same-partition URL could carry credentials from the isolated partition, but since this partition has no prior state, this does not enable tracking based on the user's prior browsing activity on that site, and the document could construct the same state by issuing uncredentialed requests itself.

7.6.6.3 Identity joining

Speculative loads provide a mechanism through which HTTP requests for later top-level navigation can be made without a user gesture. It is natural to ask whether it is possible for two coordinating sites to connect user identities.

Since existing credentials for the destination site are not sent (as explained in the previous section), that site is limited in its ability to identify the user before navigation, in much the same way as if the referring site had simply used fetch() to make an uncredentialed request. Upon navigation, this becomes similar to an ordinary navigation (e.g., clicking a link that was not speculatively loaded).

To the extent that user agents attempt to mitigate identity joining for ordinary fetches and navigations, they can apply similar mitigations to speculatively-loaded navigations.

7.7 The `X-Frame-Options` header


The ` X-Frame-Options ` HTTP response header is a way of controlling whether and how a Document may be loaded inside of a child navigable . For sites using CSP, the frame-ancestors directive provides more granular control over the same situations. It was originally defined in HTTP Header Field X-Frame-Options , but the definition and processing model here supersedes that document. [CSP] [RFC7034]

In particular, HTTP Header Field X-Frame-Options specified an ` ALLOW-FROM ` variant of the header, but that is not to be implemented.

Per the below processing model, if both a CSP frame-ancestors directive and an ` X-Frame-Options ` header are used in the same response , then ` X-Frame-Options ` is ignored.
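
For example, a response carrying both of the following headers would be governed solely by the frame-ancestors directive, with the ` X-Frame-Options ` header ignored:

Content-Security-Policy: frame-ancestors 'self'
X-Frame-Options: DENY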

For web developers and conformance checkers, its value ABNF is:


X-Frame-Options = "DENY" / "SAMEORIGIN"

To check a navigation response's adherence to ` X-Frame-Options ` , given a response response , a navigable navigable , a CSP list cspList , and an origin destinationOrigin :

  1. If navigable is not a child navigable , then return true.

  2. For each policy of cspList :

    1. If policy 's disposition is not " enforce ", then continue .

    2. If policy 's directive set contains a frame-ancestors directive, then return true.

  3. Let rawXFrameOptions be the result of getting, decoding, and splitting ` X-Frame-Options ` from response 's header list .

  4. Let xFrameOptions be a new set .

  5. For each value of rawXFrameOptions , append value , converted to ASCII lowercase , to xFrameOptions .

  6. If xFrameOptions 's size is greater than 1, and xFrameOptions contains any of " deny ", " allowall ", or " sameorigin ", then return false.

    The intention here is to block any attempts at applying ` X-Frame-Options ` which were trying to do something valid, but appear confused.

    This is the only impact of the legacy ` ALLOWALL ` value on the processing model.

  7. If xFrameOptions 's size is greater than 1, then return true.

    This means it contains multiple invalid values, which we treat the same way as if the header was omitted entirely.

  8. If xFrameOptions [0] is " deny ", then return false.

  9. If xFrameOptions [0] is " sameorigin ", then:

    1. Let containerDocument be navigable 's container document .

    2. While containerDocument is not null:

      1. If containerDocument 's origin is not same origin with destinationOrigin , then return false.

      2. Set containerDocument to containerDocument 's container document .

  10. Return true.

    If we've reached this point then we have a lone invalid value (which could potentially be one of the legacy ` ALLOWALL ` or ` ALLOW-FROM ` forms). These are treated as if the header were omitted entirely.


The following table illustrates the processing of various values for the header, including non-conformant ones:

` X-Frame-Options `                   Valid  Result
` DENY `                              yes    embedding disallowed
` SAMEORIGIN `                        yes    same-origin embedding allowed
` INVALID `                           no     embedding allowed
` ALLOWALL `                          no     embedding allowed
` ALLOW-FROM=https://example.com/ `   no     embedding allowed (from anywhere)

The following table illustrates how various non-conformant cases involving multiple values are processed:

` X-Frame-Options ` Result
` SAMEORIGIN, SAMEORIGIN ` same-origin embedding allowed
` SAMEORIGIN, DENY ` embedding disallowed
` SAMEORIGIN, ` embedding disallowed
` SAMEORIGIN, ALLOWALL ` embedding disallowed
` SAMEORIGIN, INVALID ` embedding disallowed
` ALLOWALL, INVALID ` embedding disallowed
` ALLOWALL, ` embedding disallowed
` INVALID, INVALID ` embedding allowed

The same results are obtained whether the values are delivered in a single header whose value is comma-delimited, or in multiple headers.

7.8 Text directives and URL fragments

7.8.1 Introduction

This section is non-normative.

Text directives add support for specifying a text snippet in a URL fragment . When navigating to a URL whose fragment contains a text directive , the user agent can quickly emphasize the captured text on the page, bringing it to the user's attention.

The core use case for text directives is to allow URLs to serve as an exact text reference across the web. For example, Wikipedia URLs might link to the exact text they are quoting from a page. Similarly, search engines can serve URLs that embed text directives that direct the user to the content they are looking for in the page, rather than just linking to the top of the page.

This section defines how text directives are parsed , constructed, and "invoked" , which is the process of finding the text encapsulated by the directive, in a document.

With text directives , browsers may implement an option to "Copy URL to this text" when the user engages with various UI components, such as a context menu referring to a text selection. The browser can then generate a URL with the text selection appropriately specified , and the recipient of the URL will have the specified text conveniently indicated. Without text directives, if a user wants to share a passage of text from a page, they would likely just copy and paste the passage, in which case the recipient loses the context of the page.
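
For example, a URL sharing the passage "the specified text" from a page might look like this (page URL illustrative):

https://example.com/article#:~:text=the%20specified%20text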

7.8.2 Link lifetime

This section is non-normative.

Pages on the web often update and change their content. As such, text directive links may "rot", in that the text content they point to no longer exists on the destination page. This specification attempts to maximize the useful lifetime of text directive links by using the actual text content as the URL payload, and by allowing a fallback element-id fragment.

In user sharing use cases, the link is often transient, intended to be used only within a short time of sending. For longer duration use cases, such as references and web page links, text directives are still valuable since they degrade gracefully into an ordinary link. Additionally, the presence of a stale text directive can be useful information to surface to a user, to help them understand the link creator's original intent and that the page content may have changed since the link was created.

See the Generating text fragment directives section for best practices on how to create robust text directive links.

7.8.3 Exposure to script

This section is non-normative.

This section describes how fragment directives are exposed to script and updated across navigations.

When a URL including a fragment directive is written to a session history entry , the directive is extracted from the URL and stored in the entry's directive state member. Importantly, since a Document is populated from a session history entry , its URL will not include fragment directives . Similarly, since the Location object is a representation of the active document 's URL , all getters on it will produce a fragment directive -stripped version of the URL.

In short, this ensures that fragment directives are not exposed to script, and are rather tracked by internal mechanisms alongside script-exposed URLs.

Additionally, since the HashChangeEvent is fired in response to a changed fragment between URLs of session history entries, hashchange will not be fired if a navigation or traversal changes only the fragment directive.

Furthermore, same-document navigations that only change a URL's fragment without specifying a new directive create new session history entries whose directive state refers to the previous entry's directive state .

Consider the below examples, which help clarify various edge cases of the above implications.


window.location = "https://example.com#page1:~:hello";
console.log(window.location.href); // 'https://example.com#page1'
console.log(window.location.hash); // '#page1'

The initial navigation created a new session history entry. The entry's URL is stripped of the fragment directive: "https://example.com#page1". The entry's directive state value is set to "hello". Since the document is populated from the entry, web APIs don't include the fragment directive in URLs.


location.hash = "page2";
console.log(location.href); // 'https://example.com#page2'

A same document navigation changed only the fragment. This adds a new session history entry in the navigate to a fragment steps. However, since only the fragment changed, the new entry's directive state points to the same state as the first entry, with a value of "hello".


onhashchange = () => console.assert(false, "hashchange doesn't fire.");
location.hash = "page2:~:world";
console.log(location.href); // 'https://example.com#page2'
onhashchange = null;

A same document navigation changes only the fragment but includes a fragment directive. Since an explicit directive was provided, the new entry includes its own directive state with a value of "world".

The hashchange event is not fired since the page-visible fragment is unchanged; only the fragment directive changed. This is because the comparison for hashchange is done on the URLs in the session history entries, where the fragment directive has been removed.


history.pushState("", "", "page3");
console.log(location.href); // 'https://example.com/page3'

pushState creates a new session history entry for the same document. However, since the non-fragment portion of the URL has changed, this entry has its own directive state , whose value is currently null.

In other cases where a URL is not set to a session history entry, there is no fragment directive stripping.

For URL objects:


let url = new URL('https://example.com#foo:~:bar');
console.log(url.href); // 'https://example.com#foo:~:bar'
console.log(url.hash); // '#foo:~:bar'

For a or area elements:


<a id='anchor' href="https://example.com#foo:~:bar">Anchor</a>
<script>
  console.log(anchor.href); // 'https://example.com#foo:~:bar'
  console.log(anchor.hash); // '#foo:~:bar'
</script>

7.8.4 Applying directives to a document

This specification intentionally doesn't define what actions a user agent takes to "indicate" a text match. There are different experiences and trade-offs a user agent could make. Some examples of possible actions:

  1. Providing visual emphasis or highlight of the text passage.

  2. Automatically scrolling the passage into view when the page is navigated.

  3. Activating a UA's find-in-page feature on the text passage.

  4. Providing a "Click to scroll to text passage" notification.

  5. Providing a notification when the text passage isn't found in the page.

The choice of action can have implications for user security and privacy. See the security and privacy section for details.


The UA may choose to scroll the text fragment into view as part of the try to scroll to the fragment steps or by some other mechanism; however, it is not required to scroll the match into view.

The UA should visually indicate the matched text in some way such that the user is made aware of the text match, such as with a high-contrast highlight.

The UA should provide to the user some method of dismissing the match, such that the matched text no longer appears visually indicated.

The exact appearance and mechanics of the indication are left as UA-defined. However, the UA must not use any methods observable by author script, such as the Document's selection , to indicate the text match. Doing so could allow attack vectors for content exfiltration.

The UA must not visually indicate any provided context terms.

Since the indicator is not part of the document's content, UAs should consider ways to differentiate it from the page's content as perceived by the user.

The UA could provide an in-product help prompt the first few times the indicator appears to help train the user that it comes from the linking page and is provided by the UA.

7.8.4.1 URLs in UA features

UAs provide a number of consumers for a document's URL (outside of programmatic APIs like window.location ). Examples include a location bar indicating the URL of the currently visible document, or the URL used when a user requests to create a bookmark for the current page.

To avoid user confusion, UAs should be consistent in whether such URLs include the fragment directive . This section provides a default set of recommendations for how UAs can handle these cases.

We provide these as a baseline for consistent behavior; however, as these features don't affect cross-UA interoperability, they are not strict conformance requirements.

Exact behavior is left up to the implementing UA, which can have differing constraints or reasons for modifying the behavior. For example, UAs can allow users to configure defaults, or expose UI options so users can choose whether to include fragment directives in these URLs.

It's also useful to allow UAs to experiment with providing a better experience. For example, the UA's displayed URL could elide the text fragment once the user scrolls the indicated text out of view.

The general principle is that a URL should include the fragment directive only while the visual indicator is visible (i.e., not dismissed). If the user dismisses the indicator, then the URL should reflect this by removing the fragment directive .

If the URL includes a text fragment but a match wasn't found in the current page, the UA may choose to omit it from the exposed URL.

A text fragment that isn't found on the page can be useful information to surface to a user to indicate that the page has changed since the link was created.

However, it's unlikely to be useful to the user in a bookmark.

A few common examples are provided in the subsections below.

We use "text fragment" and "fragment directive" interchangeably here as text fragments are assumed to be the only kind of directive. If additional directives are added in the future, the UX in these cases will have to be re-evaluated separately for new directive types.

7.8.4.1.1 Location bar

The location bar's URL should include a text fragment while it is visually indicated. The fragment directive should be stripped from the location bar URL when the user dismisses the indication.

It is recommended that the text fragment be displayed in the location bar's URL even if a match wasn't located in the document.

7.8.4.1.2 Bookmarks

Many UAs provide a "bookmark" feature allowing users to store a convenient link to the current page in the UA's interface.

A newly created bookmark should, by default, include the fragment directive in the URL if, and only if, a match was found and the visual indicator hasn't been dismissed.

Navigating to a URL from a bookmark should process a fragment directive as if it were navigated to in a typical navigation.

7.8.4.1.3 Sharing

Some UAs provide a method for users to share the current page with others, typically by providing the URL to another app or messaging service.

When providing a URL in these situations, it should include the fragment directive if, and only if, a match was found and the visual indicator hasn't been dismissed.

7.8.5 Supporting concepts

To avoid compatibility issues with usage of existing URL fragments, this spec introduces the concept of a fragment directive . It is the portion of the URL fragment that follows the fragment directive delimiter and may be null if the delimiter does not appear in the fragment.

The fragment directive delimiter is the string " :~: ", that is the three consecutive code points U+003A (:), U+007E (~), U+003A (:).

The fragment directive is part of the URL fragment. This means it always appears after a U+0023 (#) code point in a URL.

To add a fragment directive to a URL like " https://example.com ", a fragment is first appended to the URL: " https://example.com#:~:text=foo ".


The fragment directive is parsed and processed into individual directives , which are instructions to the user agent to perform some action. Multiple directives may appear in the fragment directive.

The only directive introduced in this spec is the text directive but others could be added in the future.

" https://example.com#:~:text=foo&text=bar&unknownDirective " Contains 2 text directives and one unknown directive.

To avoid impacting page operation, the fragment directive is stripped from script-accessible APIs so that it cannot interact with author script. This also ensures future directives can be added without web compatibility risk.

A text directive is a kind of directive representing a range of text to be indicated to the user. It is a struct consisting of four strings:

start
a non-empty string

end
null or a non-empty string

prefix
null or a non-empty string

suffix
null or a non-empty string


Each Document has a pending text directives , which is either a list of text directives or null, initially null.

7.8.6 Syntax

A text directive is specified in the fragment directive with the following format:

#:~:text=[prefix-,]start[,end][,-suffix]
          context  |--match--|  context

(Square brackets indicate an optional parameter.)

The text parameters are percent-decoded before matching. Dash (-), ampersand (&), and comma (,) characters in text parameters are percent-encoded to avoid being interpreted as part of the text directive syntax.

The only required parameter is " start ". If only " start " is specified, then the first instance of this exact text string is the target text.

" #:~:text=an%20example%20text%20fragment " indicates that the exact text " an example text fragment" is the target text.

If the " end " parameter is also specified, then the text directive refers to a range of text in the page. The target text range is the text range starting at the first instance of " start ", until the first instance of " end " that appears after " start ". This is equivalent to specifying the entire text range in the " start " parameter, but allows the URL to avoid being bloated with a long text directive.

" #:~:text=an%20example,text%20fragment " indicates that the first instance of " an example " until the following first instance of "text fragment" is the target text.

The other two optional parameters are context terms. They are specified by the dash (-) character succeeding the prefix and preceding the suffix, to differentiate them from the " start " and " end " parameters, as any combination of optional parameters can be specified.

Context terms are used to disambiguate the target text fragment. The context terms can specify the text immediately before (prefix) and immediately after (suffix) the text fragment, allowing for whitespace.

While a match succeeds only if the context terms surround the target text fragment, any amount of whitespace is allowed between context terms and the text fragment. This allows context terms to cross element boundaries, for example if the target text fragment is at the beginning of a paragraph and needs disambiguation by the previous element's text as a prefix.

The context terms are not part of the targeted text fragment and are not visually indicated.

" #:~:text=this%20is-,an%20example,-text%20fragment " would match to " an example " in " this is an example text fragment ", but not match to " an example " in " here is an example text ".

7.8.7 Parsing and processing model

7.8.7.1 Parsing

To parse a text directive , given a string text directive value , perform the following steps. They return a text directive -or-null.

This algorithm takes a single text directive value string as input (e.g., "prefix-,foo,bar") and attempts to parse the string into the components of the directive (e.g., ("prefix", "foo", "bar", null)). See [[#syntax]] for what each of these components means and how they're used.

  1. Let prefix , suffix , start , and end each be null.

  2. Assert : text directive value is an ASCII string with no code points in the fragment percent-encode set and no instances of U+0026 AMPERSAND character (&).

  3. Let tokens be a list of strings that result from strictly splitting text directive value on U+002C (,).

  4. If tokens has size less than 1 or greater than 4, then return null.

  5. If the first item of tokens ends with U+002D (-):

    1. Set prefix to the substring of tokens [0] from 0 with length tokens [0]'s length - 1.

    2. Remove the first item of tokens .

    3. If prefix is the empty string or contains any instances of U+002D (-), then return null.

    4. If tokens is empty , then return null.

  6. If the last item of tokens starts with U+002D (-):

    1. Set suffix to the substring of the last item of tokens from 1 to the end of the string.

    2. Remove the last item of tokens .

    3. If suffix is the empty string or contains any instances of U+002D (-), then return null.

    4. If tokens is empty , then return null.

  7. If tokens has size greater than 2, then return null.

  8. Assert : tokens has size 1 or 2.

  9. Set start to the first item in tokens .

  10. Remove the first item of tokens .

  11. If start is the empty string or contains any instances of U+002D (-), then return null.

  12. If tokens is not empty :

    1. Set end to the first item in tokens .

    2. If end is the empty string or contains any instances of U+002D (-), return null.

  13. Return a new text directive , with

    start
    The result of percent-decoding a text directive term given start
    end
    The result of percent-decoding a text directive term given end
    prefix
    The result of percent-decoding a text directive term given prefix
    suffix
    The result of percent-decoding a text directive term given suffix

To percent-decode a text directive term given an ASCII string -or-null term , perform the following steps. They return a string -or-null.

  1. If term is null, then return null.

  2. Let decoded bytes be the result of percent-decoding term .

  3. Return the result of running UTF-8 decode without BOM on decoded bytes .

To parse the fragment directive , given an ASCII string fragment directive , perform the following steps. They return a list of text directives parsed from fragment directive .

  1. Let directives be the result of strictly splitting fragment directive on U+0026 AMPERSAND character (&).

  2. Let output be an empty list .

  3. For each directive in directives :

    1. If directive does not start with " text= ", then continue .

    2. Let text directive value be the code point substring from 5 to the end of directive .

      Note: this might be the empty string.

    3. Let parsed text directive be the result of parsing text directive value .

    4. If parsed text directive is non-null, append it to output .

  4. Return output .
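The two algorithms above can be transcribed fairly directly. The following TypeScript sketch is non-normative; it approximates percent-decode plus UTF-8 decode with decodeURIComponent, which, unlike the spec's percent-decode, throws on malformed escape sequences.

    // Sketch: parse a fragment directive string into text directives.
    interface TextDirective {
      start: string;
      end: string | null;
      prefix: string | null;
      suffix: string | null;
    }

    const dec = (t: string | null) => (t === null ? null : decodeURIComponent(t));

    function parseTextDirective(value: string): TextDirective | null {
      let prefix: string | null = null;
      let suffix: string | null = null;
      const tokens = value.split(",");
      if (tokens.length < 1 || tokens.length > 4) return null;
      if (tokens[0].endsWith("-")) {
        prefix = tokens[0].slice(0, -1);
        tokens.shift();
        if (prefix === "" || prefix.includes("-") || tokens.length === 0) return null;
      }
      if (tokens.length > 0 && tokens[tokens.length - 1].startsWith("-")) {
        suffix = tokens[tokens.length - 1].slice(1);
        tokens.pop();
        if (suffix === "" || suffix.includes("-") || tokens.length === 0) return null;
      }
      if (tokens.length > 2) return null;
      const start = tokens.shift()!;
      if (start === "" || start.includes("-")) return null;
      let end: string | null = null;
      if (tokens.length > 0) {
        end = tokens[0];
        if (end === "" || end.includes("-")) return null;
      }
      return { start: dec(start)!, end: dec(end), prefix: dec(prefix), suffix: dec(suffix) };
    }

    function parseFragmentDirective(fragmentDirective: string): TextDirective[] {
      const output: TextDirective[] = [];
      for (const directive of fragmentDirective.split("&")) {
        if (!directive.startsWith("text=")) continue;
        const parsed = parseTextDirective(directive.slice(5));
        if (parsed !== null) output.push(parsed);
      }
      return output;
    }

    // parseFragmentDirective("text=prefix-,foo&unknown&text=bar,baz")
    // → [{ start: "foo", prefix: "prefix", … }, { start: "bar", end: "baz", … }]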

The following helper algorithm removes and returns a fragment directive string from a URL :

This algorithm makes a URL's fragment end at the fragment directive delimiter . The returned fragment directive includes all characters that follow the delimiter but does not include the delimiter.

TODO: If a URL's fragment ends with ':~:' (i.e., empty directive), this will return null which is treated as the URL not specifying an explicit directive (and avoids clobbering an existing one). But maybe in this case we should return the empty string? That way a page can explicitly clear directives/highlights by navigating/pushState to '#:~:'.

To remove the fragment directive from a URL url , run these steps:

  1. Let raw fragment be equal to url 's fragment .

  2. Let fragment directive be null.

  3. If raw fragment is non-null and contains the fragment directive delimiter as a substring:

    1. Let position be the position variable pointing to the first code point of the first instance, if one exists, of the fragment directive delimiter in raw fragment , or past the end of raw fragment otherwise.

    2. Let new fragment be the code point substring by positions of raw fragment from the start of raw fragment to position .

    3. Advance position by the code point length of the fragment directive delimiter .

    4. If position does not point past the end of raw fragment , then set fragment directive to the code point substring to the end of the string raw fragment starting from position .

    5. Set url 's fragment to new fragment .

  4. Return fragment directive .

https://example.org/#test:~:text=foo will be parsed such that the fragment is the string "test" and the fragment directive is the string "text=foo".
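A non-normative sketch of this helper, operating on a fragment string (e.g., a URL's hash without the leading "#") rather than a full URL record. Per the TODO above, an empty trailing directive comes back as null.

    // Sketch: split a fragment at the fragment directive delimiter.
    const DELIMITER = ":~:";

    function removeFragmentDirective(rawFragment: string | null): {
      fragment: string | null;
      fragmentDirective: string | null;
    } {
      if (rawFragment === null || !rawFragment.includes(DELIMITER)) {
        return { fragment: rawFragment, fragmentDirective: null };
      }
      const position = rawFragment.indexOf(DELIMITER);
      const newFragment = rawFragment.slice(0, position);
      const rest = rawFragment.slice(position + DELIMITER.length);
      // An empty trailing directive is treated as null, as in the steps above.
      return { fragment: newFragment, fragmentDirective: rest.length > 0 ? rest : null };
    }

    // removeFragmentDirective("test:~:text=foo")
    // → { fragment: "test", fragmentDirective: "text=foo" }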

7.8.7.2 Finding and invoking text directives

This section outlines several algorithms and definitions that specify how to turn a full fragment directive string into a list of Range objects.

At a high level, we take a fragment directive string that looks like this:


    text=prefix-,foo&unknown&text=bar,baz

We break this up into the individual text directives:


    text=prefix-,foo
    text=bar,baz

For each text directive, we perform a search in the document for the first instance of rendered text that matches the restrictions in the directive. Each search is independent of any others; that is, the result is the same regardless of how many other directives are provided or their match result.

If a directive successfully matches to text in the document, it returns a range indicating the match in the document. The invoke text directives steps are the high level API provided by this section. These return a list of ranges that were matched by the individual directive matching steps, in the order the directives were specified in the fragment directive string.

If a directive was not matched, it does not add an item to the returned list.

To invoke text directives , given a list of text directives text directives and a Document document , perform these steps:

  1. Let ranges be a list of Range objects, initially empty.

  2. For each directive of text directives :

    1. Let range be the result of running find a range from a text directive given directive and document .

    2. If range is non-null, then append range to ranges .

  3. Return ranges .
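A direct, non-normative transcription of these steps, assuming the TextDirective shape from the parsing sketch above and a hypothetical findRangeFromTextDirective helper corresponding to the algorithm below:

    type TextDirective = {
      start: string; end: string | null;
      prefix: string | null; suffix: string | null;
    };

    // Hypothetical helper mirroring "find a range from a text directive".
    declare function findRangeFromTextDirective(
      directive: TextDirective, document: Document): Range | null;

    function invokeTextDirectives(
        directives: TextDirective[], document: Document): Range[] {
      const ranges: Range[] = [];
      for (const directive of directives) {
        // Each search is independent of the others; a failed match simply
        // contributes nothing to the returned list.
        const range = findRangeFromTextDirective(directive, document);
        if (range !== null) ranges.push(range);
      }
      return ranges;
    }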



This algorithm takes as input a successfully parsed text directive and a document in which to search. It returns a range that points to the first text passage within the document that matches the searched-for text and satisfies the surrounding context. It returns null if no such passage exists.

end can be null. If omitted, this is an "exact" search and the returned range will contain a string exactly matching start . If end is provided, this is a "range" search; the returned range will start with start and end with end . In the normative text below, we call a text passage that matches the provided start and end , regardless of which mode we're in, the "matching text". Either or both of prefix and suffix can be null, in which case context on that side of a match is not checked. E.g., if prefix is null, text is matched without any requirement on what text precedes it.

While the matching text and its prefix/suffix can span across block boundaries, the individual parameters to these steps cannot. That is, each of prefix , start , end , and suffix will only match text within a single block.

The directive

    :~:text=The%20quick,lazy%20dog

will fail to match in:

    <div>The</div>
    <div>quick brown fox</div>
    <div>jumped over the lazy dog</div>

because the starting string "The quick" does not appear within a single, uninterrupted block; the instance of "The quick" in the document has a block boundary between "The" and "quick". It does, however, match in this example:

    <div>The quick brown fox</div>
    <div>jumped over the lazy dog</div>

To find a range from a text directive , given a text directive parsedValues and Document document , run the following steps:

  1. Let searchRange be a range with start ( document , 0) and end ( document , document 's length ).

  2. While searchRange is not collapsed :

    1. Let potentialMatch be null.

    2. If parsedValues 's prefix is not null:

      1. Let prefixMatch be the result of running the find a string in range steps with parsedValues 's prefix , searchRange , true, false, and false.

      2. If prefixMatch is null, then return null.

      3. Set searchRange 's start to the first boundary point after prefixMatch 's start .

      4. Let matchRange be a range whose start is prefixMatch 's end and end is searchRange 's end .

      5. Advance matchRange 's start to the next non-whitespace position .

      6. If matchRange is collapsed return null.

        This can happen if prefixMatch 's end or its subsequent non-whitespace position is at the end of the document.
      7. Assert : matchRange 's start node is a Text node.

        matchRange 's start now points to the next non-whitespace text data following a matched prefix.
      8. Let mustEndAtWordBoundary be true if parsedValues 's end is non-null or parsedValues 's suffix is null, false otherwise.

      9. Set potentialMatch to the result of running the find a string in range steps with parsedValues 's start , matchRange , false, mustEndAtWordBoundary , and true.

      10. If potentialMatch is null, then continue .

        In this case, we found a prefix but it was followed by something other than a matching text so we'll continue searching for the next instance of prefix .
    3. Otherwise:

      1. Let mustEndAtWordBoundary be true if parsedValues 's end is non-null or parsedValues 's suffix is null, false otherwise.

      2. Set potentialMatch to the result of running the find a string in range steps with parsedValues 's start , searchRange , true, mustEndAtWordBoundary , and false.

      3. If potentialMatch is null, return null.

      4. Set searchRange 's start to the first boundary point after potentialMatch 's start .

    4. Let rangeEndSearchRange be a range whose start is potentialMatch 's end and whose end is searchRange 's end .

    5. While rangeEndSearchRange is not collapsed :

      1. If parsedValues 's end is non-null, then:

        1. Let mustEndAtWordBoundary be true if parsedValues 's suffix is null, false otherwise.

        2. Let endMatch be the result of running the find a string in range steps with parsedValues 's end , rangeEndSearchRange , true, mustEndAtWordBoundary , and false.

        3. If endMatch is null then return null.

        4. Set potentialMatch 's end to endMatch 's end .

      2. Assert : potentialMatch is non-null, not collapsed and represents a range exactly containing an instance of matching text.

      3. If parsedValues 's suffix is null, return potentialMatch .

      4. Let suffixRange be a range with start equal to potentialMatch 's end and end equal to searchRange 's end .

      5. Advance suffixRange 's start to the next non-whitespace position .

      6. Let suffixMatch be the result of running the find a string in range steps with parsedValues 's suffix , suffixRange , false, true, and true.

      7. If suffixMatch is non-null, return potentialMatch .

      8. If parsedValues 's end is null and suffixMatch is null, then break .

        If this is an exact match and the suffix doesn't match, start searching for the next range start by breaking out of this loop without rangeEndSearchRange being collapsed. If we're looking for a range match, we'll continue iterating this inner loop since the range start will already be correct.
      9. Set rangeEndSearchRange 's start to potentialMatch 's end .

        Otherwise, it is possible that we found the correct range start, but not the correct range end. Continue the inner loop to keep searching for another matching instance of rangeEnd.
    6. If rangeEndSearchRange is collapsed :

      1. Assert : parsedValues 's end is non-null.

      2. Return null.

        This can only happen for range matches due to the break for exact matches in step 9 of the above loop. If we couldn't find a valid rangeEnd+suffix pair anywhere in the doc then there's no possible way to make a match.
  3. Return null.
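Setting the DOM range bookkeeping aside, the control flow above can be illustrated with a deliberately simplified, string-based TypeScript sketch. It ignores block boundaries, word boundaries, and primary-level comparison, and on a failed suffix it simply resumes the outer search, so it models the shape of the algorithm rather than being a conforming implementation.

    // Sketch: find the first occurrence of start (optionally preceded by
    // prefix and followed by suffix; optionally extended to the first
    // following end) in a flat string.
    type Match = { begin: number; end: number } | null;

    function skipWhitespace(text: string, i: number): number {
      while (i < text.length && /\s/.test(text[i])) i++;
      return i;
    }

    function findMatch(
      text: string,
      d: { start: string; end?: string; prefix?: string; suffix?: string }
    ): Match {
      let searchFrom = 0;
      while (searchFrom < text.length) {
        let matchBegin: number;
        if (d.prefix !== undefined) {
          const prefixAt = text.indexOf(d.prefix, searchFrom);
          if (prefixAt === -1) return null;
          searchFrom = prefixAt + 1; // next attempt resumes past this prefix
          const startAt = skipWhitespace(text, prefixAt + d.prefix.length);
          if (!text.startsWith(d.start, startAt)) continue; // prefix matched, start didn't
          matchBegin = startAt;
        } else {
          const startAt = text.indexOf(d.start, searchFrom);
          if (startAt === -1) return null;
          searchFrom = startAt + 1;
          matchBegin = startAt;
        }
        let matchEnd = matchBegin + d.start.length;
        if (d.end !== undefined) {
          const endAt = text.indexOf(d.end, matchEnd);
          if (endAt === -1) return null;
          matchEnd = endAt + d.end.length;
        }
        if (d.suffix !== undefined) {
          const suffixAt = skipWhitespace(text, matchEnd);
          if (!text.startsWith(d.suffix, suffixAt)) continue; // keep searching
        }
        return { begin: matchBegin, end: matchEnd };
      }
      return null;
    }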

To advance a range range 's start to the next non-whitespace position , run these steps:

  1. While range is not collapsed:

    1. Let node be range 's start node .

    2. Let offset be range 's start offset .

    3. If node is part of a non-searchable subtree or if node is not a visible text node or if offset is equal to node 's length then:

      1. Set range 's start node to the next node, in shadow-including tree order .

      2. Set range 's start offset to 0.

      3. Continue .

    4. If the substring data of node at offset offset and count 6 is equal to the string " &nbsp; " then:

      1. Add 6 to range 's start offset .

    5. Otherwise, if the substring data of node at offset offset and count 5 is equal to the string " &nbsp " then:

      1. Add 5 to range 's start offset .

    6. Otherwise:

      1. Let cp be the code point at the offset index in node 's data .

      2. If cp does not have the White_Space property set, then return.

      3. Add 1 to range 's start offset .
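A string-level, non-normative sketch of this walk over a single node's data, including the literal "&nbsp;" and "&nbsp" cases from the steps above; the White_Space property check is approximated with a \s test.

    // Sketch: advance offset past whitespace and literal "&nbsp;"/"&nbsp"
    // sequences in a text node's data.
    function nextNonWhitespace(data: string, offset: number): number {
      while (offset < data.length) {
        if (data.startsWith("&nbsp;", offset)) {
          offset += 6;
        } else if (data.startsWith("&nbsp", offset)) {
          offset += 5;
        } else if (/\s/.test(data[offset])) {
          // Approximates "cp has the White_Space property".
          offset += 1;
        } else {
          return offset;
        }
      }
      return offset;
    }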

To find a string in range given a string query , a range searchRange , and three booleans wordStartBounded , wordEndBounded and matchMustBeAtBeginning , run these steps:

This algorithm will return a range that represents the first instance of the query text that is fully contained within searchRange , optionally restricting itself to matches that start or end at word boundaries (see [[#word-boundaries]]). Returns null if none is found.

The basic premise of this algorithm is to walk all searchable text nodes within a block, collecting them into a list. The list is then concatenated into a single string in which we can search, using the node list to map string offsets back to a node and offset so we can return a range .

Collection breaks when we hit a block node. E.g., searching over this tree:

    <div>abc<div>d</div>e</div>

will perform a search on " abc ", then on " d ", then on " e ".

Thus, query will only match text that is continuous (i.e., uninterrupted by a block-level container) within a single block-level container.

  1. While searchRange is not collapsed :

    1. Let curNode be searchRange 's start node .

    2. If curNode is part of a non-searchable subtree :

      1. Set searchRange 's start node to the next node, in shadow-including tree order , that isn't a shadow-including descendant of curNode .

      2. Set searchRange 's start offset to 0.

      3. Continue .

    3. If curNode is not a visible text node :

      1. Set searchRange 's start node to the next node, in shadow-including tree order , that is not a DocumentType node.

      2. Set searchRange 's start offset to 0.

      3. Continue .

    4. Let blockAncestor be the nearest block ancestor of curNode .

    5. Let textNodeList be a list of Text nodes, initially empty.

    6. While curNode is a shadow-including descendant of blockAncestor and the position of the boundary point ( curNode , 0) is not after searchRange 's end :

      1. If curNode has block-level display , then break .

      2. If curNode is search invisible :

        1. Set curNode to the next node, in shadow-including tree order , that isn't a shadow-including descendant of curNode .

        2. Continue .

      3. If curNode is a visible text node then append it to textNodeList .

      4. Set curNode to the next node in shadow-including tree order .

    7. Run the find a range from a node list steps given query , searchRange , textNodeList , wordStartBounded , wordEndBounded and matchMustBeAtBeginning as input. If the resulting range is not null, then return it.

    8. If matchMustBeAtBeginning is true, return null.

    9. If curNode is null, then break .

    10. Assert : curNode is following searchRange 's start node .

    11. Set searchRange 's start to the boundary point ( curNode , 0).

  2. Return null.

A node is search invisible if it is an element in the HTML namespace and meets any of the following conditions:

A node is part of a non-searchable subtree if it is or has a shadow-including ancestor that is search invisible .

A node is a visible text node if it is a Text node, the computed value of its parent element 's 'visibility' property is 'visible', and it is being rendered .

A node has block-level display if it is an element and the computed value of its 'display' property is any of 'block', 'table', 'flow-root', 'grid', 'flex', 'list-item'.

To find the nearest block ancestor of a node node , follow these steps:

  1. Let curNode be node .

  2. While curNode is non-null:

    1. If curNode is not a Text node and it has block-level display then return curNode .

    2. Otherwise, set curNode to curNode 's parent .

  3. Return node 's node document 's document element .

To find the first common ancestor of two nodes nodeA and nodeB , follow these steps:

  1. Let commonAncestor be nodeA .

  2. While commonAncestor is non-null and is not a shadow-including inclusive ancestor of nodeB , let commonAncestor be commonAncestor 's shadow-including parent .

  3. Return commonAncestor .

To find the shadow-including parent of node follow these steps:

  1. If node is a shadow root , then return node 's host .

  2. Otherwise, return node 's parent .

To find a range from a node list given a search string queryString , a range searchRange , a list of Text nodes nodes , and booleans wordStartBounded , wordEndBounded and matchMustBeAtBeginning , follow these steps:

  1. Let searchBuffer be the concatenation of the data of each item in nodes .

    data is not correct here since that's the text data as it exists in the DOM. This algorithm means to run over the text as rendered (and then convert back to Ranges in the DOM). See this issue .

  2. Let searchStart be 0.

  3. If the first item in nodes is searchRange 's start node , then set searchStart to searchRange 's start offset .

  4. Let start and end be boundary points , initially null.

  5. Let matchIndex be null.

  6. While matchIndex is null:

    1. Set matchIndex to the index of the first instance of queryString in searchBuffer , starting at searchStart . The string search must be performed using a base character comparison, or the primary level , as defined in [[!UTS10]].

      Note: Intuitively, this is a case-insensitive search also ignoring accents, umlauts, and other marks.

    2. If matchIndex is null, return null.

    3. If matchMustBeAtBeginning is true and matchIndex is not 0, return null.

    4. Let endIx be matchIndex + queryString 's length .

      Note: endIx is the index of the last character in the match + 1.

    5. Set start to the boundary point result of get boundary point at index matchIndex run over nodes with isEnd false.

    6. Set end to the boundary point result of get boundary point at index endIx run over nodes with isEnd true.

    7. If wordStartBounded is true and matchIndex is not at a word boundary in searchBuffer , given the language from start 's node as the locale ; or wordEndBounded is true and matchIndex + queryString 's length is not at a word boundary in searchBuffer , given the language from end 's node as the locale :

      1. Set searchStart to matchIndex + 1.

      2. Set matchIndex to null.

  7. Let endInset be 0.

  8. If the last item in nodes is searchRange 's end node , then set endInset to ( searchRange 's end node 's length − searchRange 's end offset ).

    endInset is the offset from the last position in the last node in the reverse direction. Alternatively, it is the length of the node that's not included in the range.

  9. If matchIndex + queryString 's length is greater than searchBuffer 's length − endInset , then return null.

    If the match runs past the end of the search range, return null.
  10. Assert : start and end are non-null, valid boundary points in searchRange .

  11. Return a range with start start and end end .

Optionally, this will only return a match if the matched text begins or ends on a word boundary . For example, the query string "range" will always match in "mountain range", but:

  1. When requiring a word boundary at the beginning, it will not match in "color orange".

  2. When requiring a word boundary at the end, it will not match in "forest ranger".

See [[#word-boundaries]] for details and more examples.

Optionally, this will only return a match if the matched text is at the beginning of the node list.
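The search in step 6.1 relies on a primary-level (base character) comparison. A rough, non-conformant TypeScript approximation is to normalize both sides to NFD, strip combining marks, and lower-case before an ordinary index search, with the caveat noted in the comments.

    // Sketch: approximate a UTS10 primary-level string search.
    function foldForSearch(s: string): string {
      return s
        .normalize("NFD")
        .replace(/\p{Mark}/gu, "") // strip accents, umlauts, and other marks
        .toLowerCase();
    }

    function findInBuffer(query: string, buffer: string, searchStart: number): number {
      // Assumes folding preserves indices one-to-one, which holds for many
      // scripts but not all; a real implementation must map indices between
      // the folded and unfolded strings carefully.
      return foldForSearch(buffer).indexOf(foldForSearch(query), searchStart);
    }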

To get boundary point at index , given an integer index , list of Text nodes nodes , and a boolean isEnd , follow these steps:

This is a small helper routine used by the steps above to determine which node a given index in the concatenated string belongs to.

isEnd is used to differentiate start and end indices. An end index points to the "one-past-last" character of the matching string. If the match ends at node boundary, we want the end offset to remain within that node, rather than the start of the next node.

  1. Let counted be 0.

  2. For each curNode of nodes :

    1. Let nodeEnd be counted + curNode 's length .

    2. If isEnd is true, add 1 to nodeEnd .

    3. If nodeEnd is greater than index , then return the boundary point ( curNode , index − counted ).

    4. Increment counted by curNode 's length .

  3. Return null.
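This helper transcribes almost line-for-line into TypeScript; `length` below is the DOM CharacterData length used by the steps above.

    // Sketch: map an index in the concatenated buffer back to a (node,
    // offset) pair. isEnd keeps an end offset inside the node whose text
    // the match ends in, rather than at the start of the next node.
    function getBoundaryPointAtIndex(
      index: number,
      nodes: Text[],
      isEnd: boolean
    ): { node: Text; offset: number } | null {
      let counted = 0;
      for (const curNode of nodes) {
        let nodeEnd = counted + curNode.length;
        if (isEnd) nodeEnd += 1;
        if (nodeEnd > index) return { node: curNode, offset: index - counted };
        counted += curNode.length;
      }
      return null;
    }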

7.8.7.3 Word boundaries

Limiting matching to word boundaries is one of the mitigations to limit cross-origin information leakage.

See Intl.Segmenter , a proposal to specify Unicode segmentation, including word segmentation. Once specified, this algorithm can be improved by making use of the Intl.Segmenter API for word boundary matching.

A word boundary is defined in [[!UAX29]] in [[UAX29#Word_Boundaries]]. [[UAX29#Default_Word_Boundaries]] defines a default set of what constitutes a word boundary, but as the specification mentions, a more sophisticated algorithm should be used based on the locale.

Dictionary-based word bounding should take specific care in locales without a word-separating character. E.g. In English, words are separated by the space character (' '); however, in Japanese there is no character that separates one word from the next. In such cases, and where the alphabet contains fewer than 100 characters, the dictionary must not contain more than 20% of the alphabet as valid, one-letter words.

A locale is a string containing a valid [[BCP47]] language tag, or the empty string. An empty string indicates that the primary language is unknown.

A substring is word bounded in a string text , given locales startLocale and endLocale , if both the position of its first character is at a word boundary given startLocale , and the position after its last character is at a word boundary given endLocale .

A number position is at a word boundary in a string text , given a locale locale , if, using locale , either a word boundary immediately precedes the position th code unit, or text 's length is more than 0 and position equals either 0 or text 's length.

Intuitively, a substring is word bounded if it neither begins nor ends in the middle of a word.

In languages with a word separator (e.g., " " space) this is (mostly) straightforward; though there are details covered by the above technical reports such as new lines, hyphenations, quotes, etc.

Some languages do not have such a separator (notably Chinese, Japanese, and Korean). Languages such as these require dictionaries to determine what constitutes a valid word in the given locale.

Text fragments are restricted such that match terms, when combined with their adjacent context terms, are word bounded. For example, in an exact search like " prefix-,start,-suffix ", "prefix+start+suffix" will match only if the entire result is word bounded. In a range search like " prefix-,start,end,-suffix ", a match is found only if both "prefix+start" and "end+suffix" are word bounded.

The goal is that a third-party must already know the full tokens they are matching against. A range match like start,end must be word bounded on the inside of the two terms; otherwise a third party could use this repeatedly to try and reveal a token (e.g., on a page with "Balance: 123,456 $" , a third-party could set prefix="Balance: ", end="$" and vary start to try and guess the numeric token one digit at a time).

For more details, refer to the Security Review Doc .

The substring "mountain range" is word bounded within the string " An impressive mountain range " but not within " An impressive mountain ranger ".
In the Japanese string " ウィキペディアへようこそ " (Welcome to Wikipedia), " ようこそ " (Welcome) is considered word-bounded but " ようこ " is not.
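As suggested above, Intl.Segmenter can back a locale-aware boundary check. A non-normative TypeScript sketch of the "position is at a word boundary" predicate, where an empty locale string falls back to the runtime's default locale:

    // Sketch: is position at a word boundary in text, given locale?
    function isAtWordBoundary(text: string, position: number, locale: string): boolean {
      if (text.length === 0) return false;
      if (position === 0 || position === text.length) return true;
      const segmenter = new Intl.Segmenter(locale || undefined, { granularity: "word" });
      for (const { index } of segmenter.segment(text)) {
        if (index === position) return true; // a segment starts here
        if (index > position) break;
      }
      return false;
    }

    // isAtWordBoundary("mountain range", 9, "en") → true (start of "range")
    // isAtWordBoundary("forest ranger", 12, "en") → false (inside "ranger")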

7.8.8 Generating text fragment directives

This section is non-normative.

This section contains recommendations for UAs automatically generating URLs with a text directive . These recommendations aren't normative but are provided to ensure generated URLs result in maximally stable and usable URLs.

7.8.8.1 Prefer exact matching to range-based

The match text can be provided either as an exact string " text=foo%20bar%20baz " or as a range " text=foo,bar ".

Prefer to specify the entire string where practical. This ensures that if the destination page is removed or changed, the intended destination can still be derived from the URL itself.

Suppose we wish to craft a URL to https://en.wikipedia.org/wiki/History_of_computing quoting the sentence:
      The first recorded idea of using digital electronics for computing was the
      1931 paper "The Use of Thyratrons for High Speed Automatic Counting of
      Physical Phenomena" by C. E. Wynn-Williams.
We could create a range-based match like so:

    https://en.wikipedia.org/wiki/History_of_computing#:~:text=The%20first%20recorded,Williams

Or we could encode the entire sentence using an exact match term:

    https://en.wikipedia.org/wiki/History_of_computing#:~:text=The%20first%20recorded%20idea%20of%20using%20digital%20electronics%20for%20computing%20was%20the%201931%20paper%20%22The%20Use%20of%20Thyratrons%20for%20High%20Speed%20Automatic%20Counting%20of%20Physical%20Phenomena%22%20by%20C.%20E.%20Wynn-Williams

The range-based match is less stable, meaning that if the page is changed to include another instance of "The first recorded" somewhere earlier in the page, the link will now target an unintended text snippet. The range-based match is also less useful semantically. If the page is changed to remove the sentence, the user won't know what the intended target was. In the exact match case, the user can read, or the UA can surface, the text that was being searched for but not found.

Range-based matches can be helpful when the quoted text is excessively long and encoding the entire string would produce an unwieldy URL.

UAs are encouraged to encode text snippets shorter than 300 characters as an exact match. Above this limit, the UA can encode the string as a range-based match.

TODO: Can we determine the above limit in some less arbitrary way?
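A non-normative sketch of this heuristic, using the (admittedly arbitrary) 300-character cutoff; the word counts used for the range endpoints are purely illustrative.

    // Sketch: encode short selections exactly; fall back to a range-based
    // match above a length threshold.
    const EXACT_MATCH_MAX_LENGTH = 300;

    function encodeSelection(selection: string): { start: string; end?: string } {
      if (selection.length <= EXACT_MATCH_MAX_LENGTH) {
        return { start: selection };
      }
      // Use the first and last few words as the range endpoints.
      const words = selection.split(/\s+/);
      return {
        start: words.slice(0, 3).join(" "),
        end: words.slice(-3).join(" "),
      };
    }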

7.8.8.2 Use context only when necessary

Context terms allow the text directive to disambiguate text snippets on a page. However, their use can make the URL more brittle in some cases. Often, the desired string will start or end at an element boundary. The context will therefore exist in an adjacent element. Changes to the page structure could invalidate the text directive since the context and match text will no longer appear to be adjacent.

Suppose we wish to craft a URL for the following text:

      <div class="section">HEADER</div>
      <div class="content">Text to quote</div>
We could craft the text directive as follows:
      text=HEADER-,Text%20to%20quote
However, suppose the page changes to add a "[edit]" link beside all section headers. This would now break the URL.

Where a text snippet is long enough and unique, UAs are encouraged to avoid adding superfluous context terms.

Use context only if one of the following is true:

TODO: Determine the numeric limit above in less arbitrary way.

7.8.8.3 Determine if fragment ID is needed

When the UA navigates to a URL containing a text directive , it will fall back to scrolling a regular element-id based fragment into view if one exists and the text fragment isn't found.

This can be useful to provide a fallback, in case the text in the document changes, invalidating the text directive .

Suppose we wish to craft a URL to https://en.wikipedia.org/wiki/History_of_computing quoting the sentence:

      The earliest known tool for use in computation is the Sumerian abacus

By specifying the section that the text appears in, we ensure that, if the text is changed or removed, the user will still be pointed to the relevant section:

https://en.wikipedia.org/wiki/History_of_computing#Early_computation:~:text=The%20earliest%20known%20tool%20for%20use%20in%20computation%20is%20the%20Sumerian%20abacus

However, UAs should take care that the fallback element-id fragment is the correct one:

Suppose the user navigates to https://en.wikipedia.org/wiki/History_of_computing#Early_computation. They now scroll down to the Symbolic Computations section. There, they select a text snippet and choose to create a URL to it:

      By the late 1960s, computer systems could perform symbolic algebraic
      manipulations

Even though the current URL of the page is: https://en.wikipedia.org/wiki/History_of_computing#Early_computation, using #Early_computation as a fallback is inappropriate. If the above sentence is changed or removed, the page will load in the #Early_computation section which could be quite confusing to the user.

If the UA cannot reliably determine an appropriate fragment to fallback to, it should remove the fragment id from the URL:

https://en.wikipedia.org/wiki/History_of_computing#:~:text=By%20the%20late%201960s,%20computer%20systems%20could%20perform%20symbolic%20algebraic%20manipulations

7.8.9 Security and privacy considerations

Care must be taken when implementing text directives so that they cannot be used to exfiltrate information across origins. Scripts can navigate a page to a cross-origin URL with a text directive . If a malicious actor can determine that the text fragment was successfully found in the victim page as a result of such a navigation, they can infer the existence of any text on that page.

This section describes some of the attacks that could be executed with the help of text directives , and the navigation processing model changes that restrict this feature to mitigate said attacks. In summary, text directives are restricted to:

7.8.9.1 Scroll on navigation

A UA may choose to automatically scroll a matched text passage into view. This can be a convenient experience for the user but does present some risks that implementing UAs need to be aware of.

There are known (and potentially unknown) ways a scroll on navigation might be detectable and distinguished from natural user scrolls.

An origin embedded in an iframe in the target page registers an IntersectionObserver and determines in the first 500ms of page load whether a scroll has occurred. This scroll can be indicative of whether the text fragment was successfully found on the page.

Two users share the same network on which traffic is visible between them. A malicious user sends the victim a link with a text fragment to a page. The searched-for text appears nearby to a resource located on a unique (on the page) domain. The attacker might be able to infer the success or failure of the fragment search based on the order of requests for DNS lookup.

An attacker sends a link to a victim, sending them to a page that displays a private token. The attacker asks the victim to read back the token. Using a text fragment, the attacker gets the page to load for the victim such that warnings about keeping the token secret are scrolled out of view.

All known cases like this rely on specific circumstances of the target page, so they don't apply generally. The additional restrictions on when a text fragment can be invoked constrain an attacker further. Nonetheless, different UAs can come to different conclusions about whether these risks are acceptable. UAs need to consider these factors when determining whether to scroll as part of navigating to a text fragment.

Conforming UAs may choose not to scroll automatically on navigation. Such UAs may, instead, provide UI to initiate the scroll ("click to scroll"), or none at all. In these cases, the UA should provide some indication to the user that an indicated passage exists further down on the page.

The examples above illustrate that in specific circumstances, it can be possible for an attacker to extract 1 bit of information about content on the page. However, care must be taken so that such opportunities cannot be exploited to extract arbitrary content from the page by repeating the attack. For this reason, restrictions based on user activation and browsing context isolation are very important and must be implemented.

Browsing context isolation ensures that no other document can script the target document, which helps reduce the attack surface. It also ensures any malicious use is difficult to hide: a browsing context that's the only one in its group will be a top-level browsing context (i.e., a full tab/window).

If a UA does choose to scroll automatically, it must ensure no scrolling is performed while the document is in the background (for example, in an inactive tab). This ensures any malicious usage is visible to the user and prevents attackers from trying to secretly automate a search in background documents.

If a UA chooses not to scroll automatically, it must scroll a fallback element-id into view, if provided, regardless of whether a text fragment was matched. Not doing so would allow detecting the text fragment match based on whether the element-id was scrolled.

7.8.9.2 Search timing

A naive implementation of the text search algorithm could allow information exfiltration based on runtime duration differences between a matching and non-matching query. If an attacker could find a way to synchronously navigate to a text directive -invoking URL, they would be able to determine the existence of a text snippet by measuring how long the navigation call takes.

The restrictions in [[#restricting-the-text-fragment]] prevent this specific case; in particular, the no-same-document-navigation restriction. However, these restrictions are provided as multiple layers of defense.

For this reason, the implementation must ensure the runtime of [[#navigating-to-text-fragment]] steps does not differ based on whether a match has been successfully found .

This specification does not specify exactly how a UA achieves this, as there are multiple solutions with differing tradeoffs. For example, a UA may continue to walk the tree even after a match is found in the find a range from a text directive steps. Alternatively, it may schedule an asynchronous task to find and set the Document 's indicated part.

7.8.9.3 Restricting the text fragment
7.8.9.4 Restricting scroll on load

7.9 The ` Refresh ` header

The ` Refresh ` HTTP response header is the HTTP-equivalent to a meta element with an http-equiv attribute in the Refresh state . It takes the same value and works largely the same. Its processing model is detailed in create and initialize a Document object .

7.10 Browser user interface considerations

Browser user agents should provide the ability to navigate , reload , and stop loading any top-level traversable in their top-level traversable set .

For example, via a location bar and reload/stop button UI.

Browser user agents should provide the ability to traverse by a delta any top-level traversable in their top-level traversable set .

For example, via back and forward buttons, possibly including long-press abilities to change the delta.

It is suggested that such user agents allow traversal by deltas greater than one, to avoid letting a page "trap" the user by stuffing the session history with spurious entries. (For example, via repeated calls to history.pushState() or fragment navigations .)

Some user agents have heuristics for translating a single "back" or "forward" button press into a larger delta, specifically to overcome such abuses. We are contemplating specifying these heuristics in issue #7832 .

Browser user agents should offer users the ability to create a fresh top-level traversable , given a user-provided or user agent-determined initial URL .

For example, via a "new tab" or "new window" button.

Browser user agents should offer users the ability to arbitrarily close any top-level traversable in their top-level traversable set .

For example, by clicking a "close tab" button.


Browser user agents may provide ways for the user to explicitly cause any navigable (not just a top-level traversable ) to navigate , reload , or stop loading .

For example, via a context menu.

Browser user agents may provide the ability for users to destroy a top-level traversable .

For example, by force-closing a window containing one or more such top-level traversables .


When a user requests a reload of a navigable whose active session history entry 's document state 's resource is a POST resource , the user agent should prompt the user to confirm the operation first, since otherwise transactions (e.g., purchases or database modifications) could be repeated.

When a user requests a reload of a navigable , user agents may provide a mechanism for ignoring any caches when reloading.


All calls to navigate initiated by the mechanisms mentioned above must have the userInvolvement argument set to " browser UI ".

All calls to reload initiated by the mechanisms mentioned above must have the userInvolvement argument set to " browser UI ".

All calls to traverse the history by a delta initiated by the mechanisms mentioned above must not pass a value for the sourceDocument argument.


The above recommendations, and the data structures in this specification, are not meant to place restrictions on how user agents represent the session history to the user.

For example, although a top-level traversable 's session history entries are stored and maintained as a list, and the user agent is recommended to give an interface for traversing that list by a delta , a novel user agent could instead or in addition present a tree-like view, with each page having multiple "forward" pages that the user can choose between.

Similarly, although session history for all descendant navigables is stored in their traversable navigable , user agents could present the user with a more nuanced per- navigable view of the session history.


Browser user agents may use a top-level browsing context 's is popup boolean for the following purposes:

In both cases user agents might additionally incorporate user preferences, or present a choice as to whether to go down the popup route.

User agents that provide a minimal user interface for such popups are encouraged to not hide the browser's location bar.