Scope Analysis Guide

The Scope Analysis page supplements AI/ML IP search, trends, and IP overview by providing freedom-to-operate (FTO) and infringement-risk screening. Input a natural-language description of the subject matter of interest (e.g., product features, invention disclosures, or draft claims) and run a semantic comparison against independent claims across patents in the SynapseIP database. The closest matches are returned with context-rich analysis.

What is Scope Analysis?

Traditional search tools focus on matching titles or abstracts. Scope Analysis dives into claim language, which ultimately determines infringement exposure. Each independent claim in the SynapseIP database is embedded and indexed. Features:

  • Returns patents with claim scopes semantically closest to the input subject matter.
  • Quantifies proximity using cosine distance and similarity percentage.
  • Presents clear, at-a-glance visual indicators of risk, including assignee information.
  • Provides the full text of semantically similar claims, with direct links to the full patents.
  • Exports all results in PDF format.

This workflow meaningfully reduces the cost and time of a formal infringement assessment.

How Scope Analysis works

  1. User input: Provide up to ~20k characters describing the feature(s) or claim(s) to clear. Embedding quality improves with richer technical detail.
  2. Embedding generation: SynapseIP generates an OpenAI embedding for the submitted text (no data is stored beyond what is required to fulfill the request).
  3. KNN search: The generated embedding is compared against embeddings of the independent claims of patents in the SynapseIP database, and the closest matches (top-k, configurable) are returned; see the sketch after this list.
  4. Visualization + evidence: Results populate both the similarity map and the evidence table to concurrently provide both macro and micro views.

Note: Start with the 10–20 closest claims for rapid results. Increase the '# of claim comparisons' input to broaden coverage.
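
SynapseIP's internal implementation is not exposed, but the embed-and-search pattern described in steps 2–3 can be sketched as follows. The model name, claim vectors, and default top_k are illustrative assumptions, not the platform's actual configuration.

```python
# Minimal sketch of the embed-and-search pattern described in steps 2-3.
# Model name, claim data, and default top_k are illustrative assumptions,
# not SynapseIP's actual configuration.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed(text: str) -> np.ndarray:
    """Generate an embedding vector for the submitted description."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.asarray(resp.data[0].embedding)


def top_k_claims(description: str, claim_vectors: np.ndarray, top_k: int = 15):
    """Return (claim_index, cosine_distance) pairs for the top_k closest claims.

    claim_vectors: pre-computed embeddings of independent claims, one row per claim.
    """
    q = embed(description)
    # Cosine similarity between the query and every claim embedding.
    sims = claim_vectors @ q / (np.linalg.norm(claim_vectors, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)[:top_k]  # highest similarity first
    return [(int(i), float(1.0 - sims[i])) for i in order]  # distance = 1 - similarity
```

The default of 15 results in this sketch mirrors the default sampling depth described in the workflow below.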

Example workflow

1. Describe subject matter

Draft 1-3 concise paragraphs capturing inventive concepts, functionalities, and technical details. Include language tied to technical components so the embedding has enough signal.

2. Choose sampling depth

Use the '# of claim comparisons' input to specify the number of independent claims to be returned. The default is 15; expanding to 40–50 may be useful if the initial claim comparisons are near misses.

3. Run the analysis

Click 'Run scope analysis' to execute the embedding search and KNN graphing operations. Results are returned with similarity scores, graph positions, and risk tiles tailored to the query.

4. Inspect the graph

Hover over nodes to preview claim snippets, click to highlight a specific patent, and look for patents clustered around specific assignees.

5. Review supporting claims

In the table, click any claim cell to expand the full text. Patent numbers link to Google Patents, where you can view the full documents.

6. Export results

The results table can be exported as a PDF document for offline reference and review.

Interpreting the graph & table

Radial layout. The input text sits at the center. Nodes closer to the center represent higher similarity (lower cosine distance). Radius scaling exaggerates separation so that the highest-risk matches stand out immediately.
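
The exact scaling curve is not documented here; as a rough sketch (assumed exponent and radius range, not SynapseIP's actual values), a sub-linear mapping from cosine distance to radius spreads the nearest nodes apart so close matches are easier to distinguish.

```python
# Illustrative only: one way a radial layout can exaggerate separation among
# the closest matches. The exponent and maximum radius are assumptions, not
# the values SynapseIP uses.
def node_radius(cosine_distance: float, max_radius: float = 300.0, gamma: float = 0.5) -> float:
    d = min(max(cosine_distance, 0.0), 1.0)  # clamp distance to [0, 1]
    return max_radius * d ** gamma           # gamma < 1 stretches small distances outward
```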

Tooltip previews. Hover any node to see the patent title and first 200 characters of the matched claim.

Selection sync. Graph, summary tiles, and claim text are synchronized. Clicking a node or claim row highlights both views.

Similarity column. Percent values are derived from 1 − distance. Scores ≥ 70% may indicate high overlap risk; 55–69% indicates moderate overlap; below 55% is generally lower risk but may still be relevant.
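
The mapping from distance to the percentage and tiers above can be written as a small helper. This is a sketch; the tier labels are shorthand for this section's guidance, not a legal determination.

```python
# Sketch of the similarity percentage and risk tiers described above.
# The cut-offs mirror this section's guidance; they are screening
# heuristics, not a legal determination.
def similarity_pct(cosine_distance: float) -> float:
    return (1.0 - cosine_distance) * 100.0

def risk_tier(pct: float) -> str:
    if pct >= 70:
        return "high overlap risk"
    if pct >= 55:
        return "moderate overlap"
    return "generally lower risk, but may still be relevant"
```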

Expandable claim text. Click the claim snippet to read the entire independent claim text inline.

Tips & troubleshooting

  • Not enough matches? Increase the '# of claim comparisons' input or broaden the description with additional functional detail.
  • Mixed technology stack? Run separate analyses for each subsystem (e.g., hardware vs. software) to isolate where overlap risk concentrates.
  • Need attribution? Use the table's assignee column to see which entities own the closest claims, then prioritize outreach or monitoring accordingly.
  • Want to preserve evidence? Export the results table as a PDF, or capture screenshots of the graph for quick reference.
  • Still unsure? Email support@phaethon.llc with the query context (no confidential data required) and we can help interpret the output.